If a small tree branch pokes out onto a highway and there’s no oncoming traffic, we’d simply drift a little into the opposite lane and drive around it. But an automated car might come to a full stop, as it dutifully observes traffic laws that prohibit crossing a double-yellow line. This unexpected move would avoid bumping the object in front, but it could then cause a crash with the human drivers behind it.
Should we trust robotic cars to share our road, just because they are programmed to obey the law and avoid crashes?
Our laws are ill-equipped to deal with the rise of these vehicles (sometimes called “automated”, “self-driving”, “driverless”, and “robot” cars—I will use these interchangeably). For example, is it enough for a robot car to pass a human driving test? In licensing automated cars as street-legal, some commentators believe that it’d be unfair to hold manufacturers to a higher standard than humans, that is, to make an automated car undergo a much more rigorous test than a new teenage driver.
But there are important differences between humans and machines that could warrant a stricter test. For one thing, we’re reasonably confident that human drivers can exercise judgment in a wide range of dynamic situations that don’t appear in a standard 40-minute driving test; we presume they can act ethically and wisely. Autonomous cars are new technologies and won’t have that track record for quite some time.
Moreover, as we all know, ethics and law often diverge, and good judgment could compel us to act illegally. For example, sometimes drivers might legitimately want to, say, go faster than the speed limit in an emergency. Should robot cars never break the law in autonomous mode? If robot cars faithfully follow laws and regulations, then they might refuse to drive in auto-mode if a tire is under-inflated or a headlight is broken, even in the daytime when it’s not needed.
For the time being, the legal and regulatory framework for these vehicles is slight. As Stanford law fellow Bryant Walker Smith has argued, automated cars are probably legal in the United States, but only because of a legal principle that “everything is permitted unless prohibited.” That’s to say, an act is allowed unless it’s explicitly banned, because we presume that individuals should have as much liberty as possible. Since, until recently, there were no laws concerning automated cars, it was probably not illegal for companies like Google to test their self-driving cars on public highways.
To illustrate this point by example, Smith turns to another vehicle: a time machine. “Imagine that someone invents a time machine,” he writes. “Does she break the law by using that machine to travel to the past?” Given the legal principle nullum crimen sine lege, or “no crime without law,” she doesn’t directly break the law by the act of time-traveling itself, since no law today governs time-travel.
This is where ethics come in. When laws cannot guide us, we need to return to our moral compass or first principles in thinking about autonomous cars. Does ethics yield the same answer as law? That’s not so clear. If time-traveling alters history in such a way that causes some people to be harmed or never have been born, then ethics might find the act problematic.
This illustrates the potential break between ethics and law. Ideally, ethics, law, and policy would line up, but often they don’t in the real world. (Jaywalking and speeding are illegal, for example, but they don’t always seem to be unethical, e.g., when there’s no traffic or in case of an emergency. A policy, then, to always ticket or arrest jaywalkers and speeders would be legal but perhaps too harsh.)
But, because the legal framework for autonomous vehicles does not yet exist, we have the opportunity to build one that is informed by ethics. This will be the challenge in creating laws and policies that govern automated cars: We need to ensure they make moral sense. Programming a robot car to slavishly follow the law, for instance, might be foolish and dangerous. Better to proactively consider ethics now than defensively react after a public backlash in national news.
The Trolley Problem
Philosophers have been thinking about ethics for thousands of years, and we can apply that experience to robot cars. One classical dilemma, proposed by philosophers Philippa Foot and Judith Jarvis Thomson, is called the Trolley Problem: Imagine a runaway trolley (train) is about to run over and kill five people standing on the tracks. Watching the scene from the outside, you stand next to a switch that can shunt the train to a sidetrack, on which only one person stands. Should you throw the switch, killing the one person on the sidetrack (who otherwise would live if you did nothing), in order to save five others in harm’s way?
A simple analysis would look only at the numbers: Of course it’s better that five persons should live than only one person, everything else being equal. But a more thoughtful response would consider other factors too, including whether there’s a moral distinction between killing and letting die: It seems worse to do something that causes someone to die (the one person on the sidetrack) than to allow someone to die (the five persons on the main track) as a result of events you did not initiate and are not responsible for.
To hammer home the point that numbers alone don’t tell the whole story, consider a common variation of the problem: Imagine that you’re again watching a runaway train about to run over five people. But this time you could push or drop a very large gentleman onto the tracks; his body would derail the train in the ensuing collision, thus saving the five people farther down the track. Would you still kill one person to save five?
If your conscience starts to bother you here, it may be that you recognize a moral distinction between intending someone’s death and merely foreseeing it. In the first scenario, you don’t intend for the lone person on the sidetrack to die; in fact, you hope that he escapes in time. But in the second scenario, you do intend for the large gentleman to die; you need him to be struck by the train in order for your plan to work. And intending death seems worse than just foreseeing it.
This dilemma isn’t just a theoretical problem. Driverless trains today operate in many cities worldwide, including London, Paris, Tokyo, San Francisco, Chicago, New York City, and dozens more. As situational awareness improves with more advanced sensors, networking, and other technologies, a robot train might someday need to make such a decision.
Autonomous cars may face similar no-win scenarios, and we would hope their operating programs would choose the lesser evil. But it would be an unreasonable act of faith to think that programming issues will sort themselves out without a deliberate discussion about ethics, such as which choices are better or worse than others. Is it better to save an adult or a child? What about saving two (or three or ten) adults versus one child? We don’t like thinking about these uncomfortable and difficult choices, but programmers may have to do exactly that. Again, ethics by numbers alone seems naïve and incomplete; rights, duties, conflicting values, and other factors often come into play.
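To see concretely why “ethics by numbers alone” falls short, consider a minimal sketch of what such a numbers-only crash chooser might look like. Everything here is hypothetical and invented for illustration (the function and option names are not from any real vehicle software): it ranks crash options purely by how many lives each puts at risk, which is exactly the naive approach described above, because it has no way to represent rights, duties, intent, or the killing-versus-letting-die distinction.

```python
# Hypothetical illustration only -- all names and numbers are invented.
# A numbers-only crash chooser: it picks whichever option endangers the
# fewest people, and is blind to every other moral consideration.

def choose_option(options):
    """Return the crash option that puts the fewest lives at risk.

    `options` maps an option name to the count of people endangered.
    Ties, intent, responsibility, and the distinction between doing
    harm and allowing harm are all invisible to this function -- which
    is precisely the limitation of a numbers-only ethic.
    """
    return min(options, key=options.get)

# Toy trolley-style scenario: stay on the main track (five at risk)
# or divert to the sidetrack (one at risk).
decision = choose_option({"stay_on_main_track": 5, "divert_to_sidetrack": 1})
print(decision)  # -> divert_to_sidetrack
```

The sketch always diverts the trolley, and it would just as readily “choose” to push the large gentleman, since both look identical once reduced to a count of lives. Any real design would need to encode the further factors the paragraph above names, and deciding how to encode them is the ethical debate itself.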