Even when humans are required to stay completely engaged with the task of driving, many of them don’t. Many people don’t keep their foot hovering above the brake when cruise control is on, for instance. Or they try to multitask while driving. Google has one of the best examples proving this point—and it’s as funny as it is terrifying: Its test drivers once spotted a person driving a car while playing a trumpet. They’ve also seen people reading books and, of course, text messaging. “Lots of people aren’t paying attention to the road,” Chris Urmson, the head of Google’s Self-Driving Car Project, wrote in a blog post last year. “In any given daylight moment in America, there are 660,000 people behind the wheel who are checking their devices instead of watching the road.”
Google and Tesla both know this, but the two companies have dramatically different approaches to building autonomous vehicles. Tesla’s strategy is incremental. The idea is this: Add one sophisticated assistive-driving feature at a time, and eventually you’ll end up with a fully autonomous vehicle. (Its Autopilot feature, Tesla has emphasized repeatedly, requires a person to stay completely focused behind the wheel, even as the car does much of the driving.)
Google, on the other hand, is designing its vehicles for full autonomy from the start—a “level 4” system, as it’s known in the driverless world—in which the car does all of the driving, with no human intervention required.
“It’s not to say that either of them is right or wrong, it’s just different,” Urmson told me last fall. “From our perspective, we look at the challenges involved in getting to a self-driving car, and we don’t see it as an incremental task.”
Google didn’t always see it this way, though. It wasn’t until the company realized just how quickly people trust technology to work perfectly that it decided it had to build a car that can “shoulder the entire burden of driving,” as Urmson once put it.
“Our experience has been that when we’ve had people come and ride in our vehicles—even those who think this is smoke and mirrors, or who fundamentally don’t believe in the technology—after trying it out for as little as 10 or 15 minutes, they get it,” he told me. “And their attitudes change dramatically.”
That transformation is a good thing for Google and for the future of self-driving cars more broadly: it suggests that even skeptics will eventually accept them, Urmson says. But it also carries a danger, which is that people come to trust the technology too much. Tesla’s Autopilot is exactly the sort of feature that encourages this dynamic—no matter how many times the company emphasizes that it requires human attention, the fact that Autopilot can do so much on its own sends a dangerous mixed message.
Tesla’s Autopilot feature is in beta mode, and the drivers who test it on public roads are required to acknowledge the risks involved. But a question remains about whether the risks posed by partially autonomous systems (and their human drivers) are, in fact, justifiable.
That’s a question Tesla is confronting again now, and how the company ultimately answers it may have a profound effect on the future of driving.