Google is more than doubling its fleet of self-driving vehicles this year. But instead of adding more of its own cute bubble-shaped vehicles, or another batch of Audis, Lexus SUVs, or Toyotas like those it currently uses to test its technology, Google is working with Chrysler to build 100 driverless minivans.
In one respect, this is straight out of the so-not-flashy-it’s-actually-flashy Silicon Valley playbook. (See also: Black turtlenecks.) But it’s actually a brilliant move on the part of Google. (And Chrysler, for that matter, but that’s another story.)
For one thing, self-driving cars, when they become available for purchase, are likely to crop up first in certain kinds of environments, like small cities or large corporate campuses. A vehicle that seats eight will be attractive for businesses and institutions that might want to snap up mini-fleets of driverless cars for ridesharing.
But even more than that, picking a minivan for Google’s first direct collaboration with an automaker is really about what the Chrysler Pacifica—and minivans more broadly—represents. Boring ole safety and reliability. This is the kind of car that gets a tumble of kids to and from field hockey practice without incident; it is not a vehicle you expect to see speeding down the freeway, weaving in and out of traffic.
In other words, Google’s interest in self-driving minivans has to do with building public trust, a hurdle facing the developers of driverless cars that’s arguably even greater than the remaining technological challenges.
Getting the public to trust any new technology is difficult. Leaders in the self-driving car space like to evoke the early days of the elevator as a way to explain how it takes people time to warm up to technology that will eventually become ubiquitous. “This magic thing that would whisk you up floors. You couldn’t possibly imagine relinquishing your life to this thing,” Chris Urmson, the head of Google’s self-driving cars project, told me in an interview last year.
The elevator analogy is a useful one, but it misses one of the major aspects of self-driving car technology that fuels uneasiness: How do these things make decisions?
“The vast majority of machine learning techniques we use are uninterpretable to people,” said Julie Shah, a roboticist at MIT who specializes in human-machine collaboration. “You have no insight into why or how it’s operating this way.”
The mystery of how an algorithm takes input and generates output might be acceptable to people when they’re browsing, say, personalized Netflix recommendations; but the leap of faith required to trust that a self-driving car is making the right choices is still too great for many. (In February, a survey by the polling firm Morning Consult found that most people—51 percent—wouldn’t ride in a self-driving car, and that 43 percent of respondents called the technology “unsafe,” compared with 32 percent who said it was “safe.”)
“It limits how far we’ll be willing to use artificial intelligence,” Shah told me. “I see it as one of the fundamental barriers to AI supporting large portions of the work we do.”
There’s also something of a double standard that self-driving cars will have to contend with. Humans expect some level of inscrutability in their interactions with other humans, for instance, but machines are afforded very little room for error.
“The fear doesn’t necessarily help push it forward,” said Carol Reiley, a roboticist and the cofounder of Drive.ai, which recently obtained a permit to test its self-driving vehicles on California’s roads. “But a program ultimately just does what you tell it to—or takes the data going in, and figures out how to generate the output you want.”
Except, of course, when the output isn’t what you want. “The encroachment of technological complication through increased computerization has affected every aspect of our lives ...” wrote Samuel Arbesman in an essay for Aeon in 2014. “The nightmare scenario is not Skynet—a self-aware network declaring war on humanity—but messy systems so convoluted that nearly any glitch you can think of can happen. And they actually happen far more often than we would like.”
So far, however, Google’s self-driving cars have been astonishingly reliable. That’s in part because of how carefully they’ve been tested, but also because the technology seems to work. Its driverless cars have caused only one (minor) accident over the course of six years and more than 1.4 million miles of autonomous driving. Yet humans are notoriously bad at discerning actual risk from perceived threats to safety.
Which may be why a minivan—one that uses the same self-driving sensors and algorithms that already power Google’s other vehicles—might somehow feel safer to members of the public. And that could make all the difference.