We've already seen that conventional cars can be vulnerable to attacks by hackers. But it seems that self-driving vehicles may be equally vulnerable.
It's possible to trick a self-driving Google car into stopping or taking evasive action using around $60 worth of hardware, according to a leading security researcher.
Jonathan Petit, Principal Scientist at Security Innovation, Inc., has demonstrated that it's possible to trick the car's sensors using a laser pen and a pulse generator, which could be replaced by something simpler such as a Raspberry Pi or an Arduino.
The vulnerability lies in the car's roof-mounted 'eye', a Light Detection and Ranging (LIDAR) system that supplements the radar and cameras, using a laser to build a 3D map of the surroundings and detect potential hazards.
Using the laser pointer system, the car can be fooled into thinking there are objects alongside or ahead of it, forcing it to slow down or stop. Petit described a proof-of-concept attack in a paper written while he was a research fellow in the University of Cork's Computer Security Group.
During tests, he was able to trick the sensors into seeing 'ghost' vehicles or pedestrians from a distance of 330 ft (100 m). Although LIDAR works on private frequencies, Petit was able to record and imitate the pulses it generates to create fake objects.
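To see why a replayed pulse is so convincing, it helps to remember how a LIDAR unit turns echo timing into distance: range is simply the speed of light times half the round-trip time, so an attacker who fires a counterfeit pulse after a chosen delay decides where the 'ghost' appears. The sketch below is purely illustrative of that timing relationship; the function names and the scenario are assumptions, not Petit's actual tooling.

```python
# Illustrative sketch (not Petit's tooling): how a counterfeit echo
# received after an attacker-chosen delay maps to a fake object range.

SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def range_from_round_trip(delay_seconds: float) -> float:
    """A LIDAR infers distance from an echo's round-trip time: d = c * t / 2."""
    return SPEED_OF_LIGHT * delay_seconds / 2.0

def spoof_delay_for_range(fake_distance_m: float) -> float:
    """Delay an attacker would add before replaying a recorded pulse
    so the sensor places a 'ghost' object at fake_distance_m."""
    return 2.0 * fake_distance_m / SPEED_OF_LIGHT

if __name__ == "__main__":
    # Make a ghost car appear 20 m ahead: replay the pulse ~133 ns late.
    delay = spoof_delay_for_range(20.0)
    print(f"replay delay: {delay * 1e9:.1f} ns")
    print(f"sensor would report: {range_from_round_trip(delay):.1f} m")
```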
Google isn't the only company to use LIDAR, with Mercedes, Audi and Lexus all having experimented with similar systems. In an interview with IEEE Spectrum, Petit argues that it is never too early to start thinking about security. "There are ways to solve it," he says. "A strong system that does misbehavior detection could cross-check with other data and filter out those that aren't plausible. But I don't think car makers have done it yet. This might be a good wake-up call for them."
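The cross-checking Petit describes amounts to trusting a LIDAR detection only when another sensor corroborates it. The following sketch shows one way such a filter might look; the class, thresholds and sensor names are hypothetical assumptions for illustration, not a production design or anything attributed to a specific car maker.

```python
# Hypothetical sketch of the cross-checking defence Petit describes:
# keep a LIDAR detection only if radar or camera roughly agrees with it.
# Names and thresholds are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str        # "lidar", "radar" or "camera"
    distance_m: float
    bearing_deg: float

def corroborated(lidar_det: Detection, others: list[Detection],
                 max_gap_m: float = 2.0, max_bearing_deg: float = 5.0) -> bool:
    """True if any non-LIDAR detection roughly matches the LIDAR one."""
    return any(
        abs(o.distance_m - lidar_det.distance_m) <= max_gap_m
        and abs(o.bearing_deg - lidar_det.bearing_deg) <= max_bearing_deg
        for o in others
    )

def filter_ghosts(detections: list[Detection]) -> list[Detection]:
    """Drop LIDAR-only objects that no other sensor can see."""
    others = [d for d in detections if d.sensor != "lidar"]
    return [d for d in detections
            if d.sensor != "lidar" or corroborated(d, others)]

if __name__ == "__main__":
    frame = [
        Detection("lidar", 20.0, 0.0),   # spoofed ghost: nothing else sees it
        Detection("lidar", 45.0, -3.0),  # real car...
        Detection("radar", 44.5, -2.5),  # ...also seen by radar
    ]
    for d in filter_ghosts(frame):
        print(d)
```

The spoofed 20 m object is discarded because only the LIDAR reports it, while the real car at 45 m survives thanks to the matching radar return.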
Petit's paper is available to download, and he'll be presenting his findings at Black Hat Europe in Amsterdam next month.