Cars will be driving themselves more often in the future. Whether that happens through advanced "driver assistance" and cruise control features or through a more complete, Google-y type of autonomy, the percentage of time cars spend under full human control will decline over the next decade.
Handing over control of the world's cars to computers makes their hackability (among other things) a key concern, and as I reported yesterday, the National Science Foundation has given Utah State's Ryan Gerdes $1.2 million to study the problem.
In my story, I cited a 2010 paper by University of Washington and University of California, San Diego researchers, who said they found many security holes in today's on-board computer systems.
After I published the story, an automotive engineer sent me an email that's worth sharing. He makes the case that cars may be less prone to hacking than the 2010 paper or Gerdes' work make it seem. The engineer asked that I keep his name out of the post so that he could comment freely about his industry.
I wanted to comment on your article about hacking autonomous cars. While it is wise for researchers and manufacturers to increase security, I feel that currently the risk is overblown.
I work for a consulting company (not a household name but big in certain circles of the industry) in engine development. I’ve done a lot of work with the software and the hardware in a lab and have also worked in vehicles. I’ve had full access to libraries of information that could be used to do the attacks from the 2010 University of Washington paper.
The expertise, tools, and time required to reverse engineer the network messages (the CAN library) and send the right messages for an attack are prohibitive. The attacks described in that paper are extremely crude. I say that as someone who has worked with a lot of aftermarket tools in my hobbies, as well as in my work on the OEM side. [OEMs are suppliers who make parts for the car makers.]
In that paper they took a shotgun approach of just seeing what messages they could fire off to cause trouble, all on the least secure part of the vehicle: the communication between modules, rather than the guts of the modules themselves. It took a lot of time to figure out, and the CAN message library changes with model year and with manufacturer. It's fragmented, like the Android operating system.
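The "shotgun approach" the engineer describes is essentially fuzzing the CAN bus: generating arbitrary frames and watching the vehicle for a reaction. As a rough illustration only (no real vehicle message IDs appear here, and a real fuzzer would transmit each frame on hardware), a minimal sketch might look like this:

```python
import random
from dataclasses import dataclass

@dataclass
class CanFrame:
    """A simplified classical CAN frame: an 11-bit arbitration ID
    and 0-8 data bytes."""
    arbitration_id: int
    data: bytes

def random_frame(rng: random.Random) -> CanFrame:
    """Build one random frame, the way a 'shotgun' fuzzer would."""
    arb_id = rng.randrange(0x800)      # 11-bit standard identifier space
    length = rng.randrange(9)          # DLC: 0 to 8 data bytes
    payload = bytes(rng.randrange(256) for _ in range(length))
    return CanFrame(arb_id, payload)

def fuzz(n: int, seed: int = 0) -> list[CanFrame]:
    """Generate n random frames. On a real bus, each would be sent and
    the vehicle observed for an effect; here we only build the frames."""
    rng = random.Random(seed)
    return [random_frame(rng) for _ in range(n)]

frames = fuzz(1000)
```

The point of the sketch is how undirected this is: with no knowledge of a manufacturer's CAN library, an attacker is sampling blindly from the identifier space and then laboriously mapping which frames do anything, a mapping that breaks again with each model year.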
For autonomous vehicles, "reprogramming" in a more sophisticated way than described there is extremely difficult. Even making a car intelligently speed up and slow down is hard, because inside the modules are many failsafe diagnostics called "rationality monitors" that can detect a messed-up sensor signal.
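A rationality monitor is, at heart, a plausibility check: the module compares a reading against what the rest of the vehicle's state says it could be. A minimal sketch of the idea, with signal names and the tolerance invented purely for illustration:

```python
def rationality_check(vehicle_speed_kph: float,
                      wheel_speed_kph: float,
                      tolerance_kph: float = 10.0) -> bool:
    """Plausibility check: vehicle speed derived from one source should
    roughly agree with the wheel-speed sensors. Returns True if the two
    signals are mutually plausible."""
    return abs(vehicle_speed_kph - wheel_speed_kph) <= tolerance_kph

# A spoofed wheel-speed value that disagrees with the rest of the
# vehicle's state trips the monitor; the controller would then fall
# back to a default value or a limp-home mode rather than act on it.
plausible = rationality_check(100.0, 103.0)   # small disagreement: OK
spoofed = rationality_check(100.0, 0.0)       # large disagreement: flagged
```

Because many signals are cross-checked this way, an attacker cannot simply forge one message; the forged value has to stay consistent with every other signal the monitors compare it against.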
In your article there is a comparison to programmable battle robots. The control logic for such things is very, very crude compared to a modern car. Cars are more and more physics- and model-based; you have to change ten physics-based look-up tables to make the car do what you want. It's not like the early days of computer controls.
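The "physics-based look-up tables" he mentions are calibration maps inside the controller, read by interpolating between breakpoints. A sketch of how one such map is looked up (the axes and spark-advance values below are invented, not from any real calibration):

```python
from bisect import bisect_right

def interp_2d(x_axis, y_axis, table, x, y):
    """Bilinear interpolation in a calibration map, the way an engine
    controller looks up, e.g., spark advance from RPM and load.
    table[j][i] holds the value at (x_axis[i], y_axis[j])."""
    def bracket(axis, v):
        # Find the cell containing v and the fractional position in it.
        i = min(max(bisect_right(axis, v) - 1, 0), len(axis) - 2)
        t = (v - axis[i]) / (axis[i + 1] - axis[i])
        return i, min(max(t, 0.0), 1.0)
    i, tx = bracket(x_axis, x)
    j, ty = bracket(y_axis, y)
    a = table[j][i] * (1 - tx) + table[j][i + 1] * tx
    b = table[j + 1][i] * (1 - tx) + table[j + 1][i + 1] * tx
    return a * (1 - ty) + b * ty

rpm_axis  = [1000.0, 3000.0, 5000.0]   # hypothetical breakpoints
load_axis = [0.2, 0.5, 0.8]
spark_map = [[10.0, 20.0, 30.0],       # hypothetical calibration values
             [12.0, 22.0, 32.0],
             [14.0, 24.0, 34.0]]

advance = interp_2d(rpm_axis, load_axis, spark_map, 2000.0, 0.35)  # ~16.0
```

Real controllers chain many such maps together, each tuned against physical models, which is the engineer's point: changing the car's behavior coherently means re-tuning all of them in concert, not flipping one value.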
Let me put it to you this way: nobody knows how the software really works. Everyone inside the company has their little area that they are experts in, but the knowledge is spread among so many people that it is really hard to pull this kind of thing off. The effort and skill required make what you describe extremely unlikely at this time and in the near future (2017-2019 models are in development right now). If it were to happen, it would require coordinated use of the network vulnerabilities, which is a crude way of doing things and is more likely to disable a car than to effectively control it. Aftermarket knowledge of these data networks is very limited, confined mostly to enthusiast circles who want to make their own cars go faster rather than cause trouble.
I hope you read all this. You had a very good article; I just wanted to present a point of view from someone who has been on both the OEM and the aftermarket/reverse-engineering side.