Everybody knew this day would come.
After several years and more than 1.4 million miles of test driving, Google’s perfect streak came to an end. In all that time, the tech giant’s self-driving cars had been in fewer than two dozen minor accidents, none of which were caused by the autonomous vehicles.
On February 14, that changed. One of Google’s self-driving Lexus SUVs, traveling about 2 miles per hour, side-swiped a city bus in Mountain View, California, according to several reports. No one was injured. The Google car had been trying to merge in front of the bus to avoid sandbags on the roadway ahead.
Google said in a statement that “we clearly bear some responsibility, because if our car hadn’t moved, there wouldn’t have been a collision.” The test driver in the car—Google has human drivers ready to take over when its cars are in autonomous mode—incorrectly believed the bus was going to slow or stop to let Google’s car merge. Apparently the self-driving car thought so, too.
This was bound to happen sooner or later. Google’s self-driving cars are logging some 15,000 autonomous miles per week on public streets. And while the accident will certainly offer a learning experience for Google’s engineers, it doesn’t actually call into question the larger goal of building reliable, safe self-driving cars. I suspect this fender bender won’t change public opinion much, if at all.
There’s still something that might, though.
I’ve interviewed dozens of computer scientists, artificial-intelligence researchers, engineers, and other thinkers focused on self-driving cars in the past several months, and almost all of them bring up the same worry: the first fatal accident in which a self-driving car is to blame.
If driverless cars are to deliver on their promise and really replace the majority of human-driven cars on the roads, then at that scale a fatal accident will eventually happen. And that first fatality could doom the entire effort.
How the public responds to the first human deaths caused by self-driving cars will ultimately determine the technology’s trajectory.
There’s some precedent for all this, of course. It’s not as though the car as we know it today was thwarted by human deaths. The first recorded traffic fatality in the United States occurred in 1899, in New York City, when a man stepping off a trolley was struck by a taxi.
The three decades that followed were chaotic and deadly. Scholars and justices debated whether the automobile was, perhaps, inherently evil. By the 1920s, cars were causing so many deaths that people in cities like New York and Detroit began throwing parades in an attempt to underscore the need for traffic safety. Tow trucks would haul smashed, totaled vehicles along the course of the parade. From The Detroit News:
Some wrecks featured mannequin drivers dressed as Satan and bloody corpses as passengers. Children crippled from accidents rode in the back of open cars waving to other children watching from sidewalks. Washington, D.C., and New York City held parades including 10,000 children dressed as ghosts, each representing a death that year. They were followed by grieving young mothers who wore white or gold stars to indicate they'd lost a child.
Eventually, traffic laws and other safety features—stop lights, brightly painted lanes, speed limits—were standardized. And car-safety technology improved, too. Vehicles got shatterproof windshields, turn signals, parking brakes, and eventually seat belts and airbags. In 1970, about 60,000 people died on American roads. By 2013, the number of annual traffic fatalities had been cut almost in half.
Self-driving cars could dramatically reduce the number of deaths yet again. If, as many researchers believe, self-driving cars end up shrinking traffic fatalities by up to 90 percent this century, driverless cars could save as many lives as anti-smoking efforts have.
But none of the promise of this technology takes away from the fact that autonomous vehicles still face a thicket of difficult ethical and regulatory uncertainties. One of the biggest questions of all is social in nature: How will the public accept a car that is 100 percent autonomous but not 100 percent safe—even if it’s far safer than a human-driven alternative?
It isn’t Google’s recent accident, but a more serious one that will reveal the answer.