Hold Your Skepticism About Google's Robot-Cars

An automated car crashed today, but it was a human's fault


Today one of Google's self-driving cars got into an accident, reported Jalopnik's Justin Hyde. "This photo of what looks like a minor case of Prius-on-Prius vehicular violence may actually be a piece of automotive history: the first accident caused by Google's self-driving car." Today's accident would have marked a historic moment, justifying some of the skepticism people have had about the cars, but it turns out that a human had been at the wheel during the crash. Hyde updated his post with the following statement from Google, via Business Insider: "Safety is our top priority. One of our goals is to prevent fender-benders like this one, which occurred while a person was manually driving the car." It turns out the cars maintain their pristine safety record: a human did it.

Since they hit roads in Nevada and California last year, Google's autonomous vehicles have driven thousands of miles with little human intervention and have never, not once, gotten into a crash. Compared with the 10.2 million accidents humans caused in 2008, according to the U.S. Census Bureau, the self-driven machines have a great track record. Robots are simply more accurate, and therefore safer, than humans: they "react faster than humans, have 360-degree perception and do not get distracted, sleepy or intoxicated, the engineers argue," explains the New York Times' John Markoff--they also don't text and drive.

Yet, at the first sign of trouble, Hyde wasted no time in criticizing the technology: "This is precisely why we're worried about self-driving cars. Perhaps the complicated set of lasers and imaging systems that Google chief autonomous car researcher Sebastian Thrun called 'the perfect driving mechanism' thought it was just looking at its shadow."

Others have expressed concerns about the automated cars, even Google. Scientific American's Nick Chambers writes, "Even Google admits that they have no good way for dealing with these unexpected moments yet, which is why every test vehicle has two backup humans on board to monitor and take over when the car reacts strangely." And having a non-person at the wheel presents some legal issues, like who should get the ticket, explains Markoff: "Under current law, a human must be in control of a car at all times, but what does that mean if the human is not really paying attention as the car crosses through, say, a school zone, figuring that the robot is driving more safely than he would?"

But today's event, along with a year of on-the-road experience, underscores that these cars might actually be safer than people. "It turns out that Google should have just let the robot keep driving," commented The Next Web's Matthew Panzarino. Even after Google's statement, Jalopnik remains skeptical: "Of course, how would we actually know whether it was being manually-driven at the time? Now that we've got confirmation that this was one of Google's self-driven cars, it's high time we got a closer look at the details of how they're trying to make it happen — and any evidence that this actually was being driven by a real, live human being." But perhaps they should accept that the computer has won today.

This article is from the archive of our partner The Wire.