Moreover, as we all know, ethics and law often diverge, and good judgment can compel us to act illegally. For example, a driver might legitimately want to exceed the speed limit in an emergency. Should robot cars never break the law in autonomous mode? If robot cars faithfully follow laws and regulations, then they might refuse to drive in auto-mode if a tire is under-inflated or a headlight is broken, even in the daytime when the headlight isn’t needed.
For the time being, the legal and regulatory framework for these vehicles is slight. As Stanford law fellow Bryant Walker Smith has argued, automated cars are probably legal in the United States, but only because of the legal principle that “everything is permitted unless prohibited.” That is to say, an act is allowed unless it’s explicitly banned, because we presume that individuals should have as much liberty as possible. Since, until recently, there were no laws concerning automated cars, it was probably not illegal for companies like Google to test their self-driving cars on public highways.
To illustrate the point, Smith turns to another vehicle: a time machine. “Imagine that someone invents a time machine,” he writes. “Does she break the law by using that machine to travel to the past?” Given the legal principle nullum crimen sine lege, or “no crime without law,” she doesn’t directly break the law by the act of time-traveling itself, since no law today governs time travel.
This is where ethics comes in. When laws cannot guide us, we need to return to our moral compass, or first principles, in thinking about autonomous cars. Does ethics yield the same answer as law? That’s not so clear. If time travel alters history in a way that causes some people to be harmed or never to have been born, then ethics might find the act problematic.
This illustrates the potential break between ethics and law. Ideally, ethics, law, and policy would line up, but in the real world they often don’t. (Jaywalking and speeding are illegal, for example, but they don’t always seem unethical, say, when there’s no traffic or in an emergency. A policy, then, of always ticketing or arresting jaywalkers and speeders would be legal but perhaps too harsh.)
But because the legal framework for autonomous vehicles does not yet exist, we have the opportunity to build one that is informed by ethics. This will be the challenge in creating laws and policies that govern automated cars: we need to ensure that they make moral sense. Programming a robot car to slavishly follow the law, for instance, might be foolish and dangerous. Better to proactively consider ethics now than to defensively react after a public backlash in national news.
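To make that point concrete, here is a minimal, hypothetical sketch in Python. The function names, parameters, and numbers are all invented for illustration and are not drawn from any real vehicle software; it simply contrasts a controller that slavishly enforces the posted speed limit with one that permits a narrow, bounded exception in an emergency, like the speeding driver mentioned earlier.

```python
# Hypothetical illustration only: a rigidly law-following speed policy versus
# one that allows a narrow, justified exception. All names and numbers are
# invented; no real autonomous-vehicle software is depicted.

def legal_speed(speed_limit_kph: float) -> float:
    """Slavish law-follower: never exceed the posted limit, no matter what."""
    return speed_limit_kph

def ethical_speed(speed_limit_kph: float, emergency: bool, road_clear: bool,
                  max_exception_kph: float = 20.0) -> float:
    """Permit a bounded exception when there is a genuine emergency and
    conditions make the extra speed reasonably safe."""
    if emergency and road_clear:
        return speed_limit_kph + max_exception_kph
    return speed_limit_kph

# A passenger is badly hurt and the highway is empty.
print(legal_speed(100))                                      # 100: the letter of the law
print(ethical_speed(100, emergency=True, road_clear=True))   # 120: a bounded, justified exception
```

Even a toy policy like this forces the questions of who decides what counts as an emergency and how large an exception should be tolerated.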
The Trolley Problem
Philosophers have been thinking about ethics for thousands of years, and we can apply that experience to robot cars. One classical dilemma, proposed by philosophers Philippa Foot and Judith Jarvis Thomson, is called the Trolley Problem: Imagine a runaway trolley (train) is about to run over and kill five people standing on the tracks. Watching the scene from the outside, you stand next to a switch that can shunt the train to a sidetrack, on which only one person stands. Should you throw the switch, killing the one person on the sidetrack (who otherwise would live if you did nothing), in order to save five others in harm’s way?
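If we wanted to hand this choice to a machine, one naive encoding is a rule that simply minimizes the number of deaths. The sketch below (hypothetical Python, with invented names) shows how such a rule resolves the classic case, and also hints at why the problem is hard: the rule treats actively killing the one as no different from letting the five die.

```python
# Hypothetical sketch of a naive "minimize casualties" rule for the trolley
# case. Purely illustrative; it ignores the moral weight many people give to
# the difference between killing and letting die.

def choose_action(deaths_if_stay: int, deaths_if_switch: int) -> str:
    """Pick whichever action kills fewer people; do nothing on a tie."""
    return "switch" if deaths_if_switch < deaths_if_stay else "stay"

# Classic version: five people on the main track, one on the side track.
print(choose_action(deaths_if_stay=5, deaths_if_switch=1))  # -> "switch"
```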