Your car – and other IoT devices – are trying to kill you

The Internet of Things (IoT) is part of a sci-fi lover’s dream, a world interconnected through smart homes and cars. The IoT puts convenience in your pocket, with apps controlling car locks and household thermostats from miles away. But in the rush to market the latest gadgets, as the technology grows more advanced, the security features may not have kept pace.

There will be an estimated 23 to 30 billion IoT devices by 2020. Those devices span industries and products, from washing machines to security doors – both of which have already featured in incidents we’ve covered.

Billy Rios, founder of WhiteScope, gave a presentation on the dangers of IoT devices at Black Hat, a cybersecurity conference held in Las Vegas. Rios was most concerned with devices that have access to the Internet, are open to the public or otherwise easily accessible, and could harm a person if exploited.

But what can be done to shore up these devices, and why hasn’t the industry at large done more to protect the average user?

One of the biggest deterrents for researchers is the lack of funding needed to exhaustively test these devices under safe conditions. Take the two white-hat researchers who went looking for vulnerabilities in smart cars. They had previously conducted two tests on hacking cars, but the manufacturers wrote off both reports: because the researchers needed physical access to the vehicles, the exploits were deemed not dangerous enough to matter.

But those early exploits included making the digital speedometer display false readings and tampering with GPS systems. And while one exploit in the software required physical access, others didn’t.

So the researchers set out to redo the test, this time relying entirely on remotely hacking a 2014 Jeep Cherokee. The car had a slew of “smart” features such as lane assist and brake guidance – not the kinds of things anyone would want a remote hacker to control.

But back to the funding: in the report, the researchers joked that it took several months’ worth of selling plasma to afford the software and hardware required to safely test the exploits on the Jeep. The total cost – not including the price of the Jeep itself – was over $15,000.

That might be a lot for two research-paper authors to afford, but hackers with an agenda might not scoff at the price. In the end, the report detailed several ways hackers could remotely break into the Jeep’s digital systems and disrupt all of those smart features. It forced a recall of 1.4 million vehicles, as well as changes to the Sprint carrier network that serviced some of those features.

But before that third report, vendors had dismissed the authors’ research, which is why vendors can’t necessarily be trusted to fix the problem either. Costly fixes get waved away with responses like “that’s a feature, not a bug”, “that’s not a practical method of attack”, “the system doesn’t work the way the researchers described”, or “the vulnerable code isn’t reachable by normal users.”

Rios also conducted research of his own for his Black Hat presentation, but it involved a car wash rather than the cars themselves. He found that the car wash’s control software had two vulnerable user accounts: the owner account, which could be cracked for limitless free car washes, and the engineering control account.

The latter could be exploited to cause mechanical malfunctions in the wash itself, controlling the doors and mechanical arms inside it. This is where another disparity in the vulnerability-reporting system comes in.

The first exploit, which essentially steals supplies in the form of free car washes, was given a CVSS (Common Vulnerability Scoring System) rating of 9.7. The second received a CVSS score of 7.1, even though that exploit could result in the serious injury or death of a person.
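To see why that kind of disparity can happen, it helps to look at how CVSS base scores are built: the formula weighs the impact on the confidentiality, integrity and availability of data, plus how easy the attack is to pull off, but it has no metric for physical harm. The sketch below is a minimal Python implementation of the published CVSS v3.1 base-score formula for the simple unchanged-scope case; the two example vectors are hypothetical illustrations (the actual vectors used to rate the car wash flaws aren’t given here), chosen only to show how a remote data-theft bug can outscore a bug whose real-world consequence is physical danger.

```python
import math

# CVSS v3.1 base-metric weights, taken from the published specification.
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}   # Attack Vector
AC = {"L": 0.77, "H": 0.44}                          # Attack Complexity
PR = {"N": 0.85, "L": 0.62, "H": 0.27}               # Privileges Required (scope unchanged)
UI = {"N": 0.85, "R": 0.62}                          # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}               # Confidentiality / Integrity / Availability

def roundup(value):
    """Round up to one decimal place, as defined in the CVSS v3.1 spec."""
    scaled = int(round(value * 100000))
    if scaled % 10000 == 0:
        return scaled / 100000.0
    return (math.floor(scaled / 10000) + 1) / 10.0

def base_score(av, ac, pr, ui, c, i, a):
    """CVSS v3.1 base score for a vulnerability whose scope is unchanged."""
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

# Hypothetical vector for a remote "free car washes" style flaw:
# network-reachable, no privileges needed, full compromise of the account data.
print(base_score("N", "L", "N", "N", "H", "H", "H"))  # -> 9.8

# Hypothetical vector for a physically dangerous flaw that needs a low-privilege
# login and only affects the integrity and availability of the controller.
print(base_score("N", "L", "L", "N", "N", "H", "H"))  # -> 8.1
# The second bug can slam doors and swing mechanical arms, yet it scores lower,
# because nothing in the formula measures harm to people.
```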

As the technology moves forward, it’s clear that several processes need to catch up: how these exploits are safely tested, how they’re addressed by vendors, and how they’re then rated by security researchers, with the potential impact on a human being taken into account.