On Sunday, the inevitable happened: An autonomous vehicle struck and killed someone. In Arizona, a woman police identified as Elaine Herzberg was crossing the street with her bicycle when a self-driving Uber SUV smashed into her.
Tempe police reported in their preliminary investigation that the vehicle was traveling at 40 miles per hour. Uber has suspended its self-driving car program in response.
This is the second death in the United States caused by a self-driving car, and it’s believed to be the first to involve a pedestrian. It’s not the first accident this year, nor is it the first time a self-driving Uber has caused a major vehicle accident in Tempe: In March 2017, a self-driving Uber SUV crashed into another car and flipped over on the highway.* As the National Transportation Safety Board opens an inquiry into the latest crash, it’s a good time for a critical review of the technical literature on self-driving cars. That literature reveals that autonomous vehicles don’t work as well as their creators might like the public to believe.
A self-driving car is like a regular car, but with sensors on the outside and a few powerful laptops hidden inside. The sensors, which include GPS receivers, LIDAR units, and cameras, transmit information back to the car’s computer system. The best way to picture the perspective of a self-driving car is to imagine you’re inside a 1980s-style first-person driving video game. The world is a 3-D grid with x, y, and z coordinates. The car moves through the grid from point A to point B, using highly precise GPS measurements gathered from nearby satellites. Several other systems operate at the same time. The car’s LIDAR sensor bounces laser pulses off its surroundings and measures the response time to build a “picture” of what is outside.
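The time-of-flight idea behind LIDAR is simple enough to sketch in a few lines. This is a hypothetical illustration, not code from any actual vehicle: a laser pulse travels out to an object and back, so the distance is the speed of light times the round-trip time, divided by two.

```python
# Hypothetical sketch of LIDAR time-of-flight ranging.
# The function name and values are illustrative, not from any real system.

SPEED_OF_LIGHT_M_PER_S = 299_792_458  # meters per second

def lidar_distance(round_trip_seconds: float) -> float:
    """Estimate the distance to an object from a laser pulse's round-trip time.

    The pulse travels out and back, so the one-way distance is
    half the total path: d = c * t / 2.
    """
    return SPEED_OF_LIGHT_M_PER_S * round_trip_seconds / 2

# A pulse that returns after about 100 nanoseconds bounced off
# something roughly 15 meters away.
print(round(lidar_distance(100e-9), 2))  # ~14.99 meters
```

Firing millions of such pulses per second in all directions, and plotting each returned distance as a point in that x, y, z grid, is what produces the “picture” the car navigates by.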