The Ethics of Autonomous Cars

Sometimes good judgment can compel us to act illegally. Should a self-driving vehicle get to make that same decision?
A self-driving car roams the streets of Las Vegas (Reuters)

If a small tree branch pokes out onto a highway and there’s no oncoming traffic, we’d simply drift a little into the opposite lane and drive around it. But an automated car might come to a full stop, dutifully observing traffic laws that prohibit crossing a double-yellow line. This unexpected move would avoid striking the object ahead, but it could cause a crash with the human drivers behind it.

Should we trust robotic cars to share our road, just because they are programmed to obey the law and avoid crashes? 

Our laws are ill-equipped to deal with the rise of these vehicles (sometimes called “automated”, “self-driving”, “driverless”, and “robot” cars—I will use these interchangeably). For example, is it enough for a robot car to pass a human driving test? In licensing automated cars as street-legal, some commentators believe that it’d be unfair to hold manufacturers to a higher standard than humans, that is, to make an automated car undergo a much more rigorous test than a new teenage driver.

But there are important differences between humans and machines that could warrant a stricter test. For one thing, we’re reasonably confident that human drivers can exercise judgment in a wide range of dynamic situations that don’t appear in a standard 40-minute driving test; we presume they can act ethically and wisely. Autonomous cars are new technologies and won’t have that track record for quite some time.

Moreover, as we all know, ethics and law often diverge, and good judgment could compel us to act illegally. For example, a driver might legitimately want to exceed the speed limit in an emergency. Should robot cars never break the law in autonomous mode? If robot cars faithfully follow laws and regulations, they might refuse to drive in auto-mode if a tire is under-inflated or a headlight is broken, even in the daytime when headlights aren’t needed.

For the time being, the legal and regulatory framework for these vehicles is slight. As Stanford law fellow Bryant Walker Smith has argued, automated cars are probably legal in the United States, but only because of a legal principle that “everything is permitted unless prohibited.” That’s to say, an act is allowed unless it’s explicitly banned, because we presume that individuals should have as much liberty as possible. Since, until recently, there were no laws concerning automated cars, it was probably not illegal for companies like Google to test their self-driving cars on public highways.

To illustrate this point by example, Smith turns to another vehicle: a time machine. “Imagine that someone invents a time machine,” he writes. “Does she break the law by using that machine to travel to the past?” Given the legal principle nullum crimen sine lege, or “no crime without law,” she doesn’t directly break the law by the act of time-traveling itself, since no law today governs time-travel.

This is where ethics comes in. When laws cannot guide us, we need to return to our moral compass, or first principles, in thinking about autonomous cars. Does ethics yield the same answer as law? That’s not so clear. If time-traveling alters history in a way that causes some people to be harmed or never to have been born, then ethics might find the act problematic.

This illustrates the potential break between ethics and law. Ideally, ethics, law, and policy would line up, but often they don’t in the real world. (Jaywalking and speeding are illegal, for example, but they don’t always seem unethical, e.g., when there’s no traffic or in an emergency. A policy, then, to always ticket or arrest jaywalkers and speeders would be legal but perhaps too harsh.)

But, because the legal framework for autonomous vehicles does not yet exist, we have the opportunity to build one that is informed by ethics. This will be the challenge in creating laws and policies that govern automated cars: We need to ensure they make moral sense. Programming a robot car to slavishly follow the law, for instance, might be foolish and dangerous. Better to proactively consider ethics now than defensively react after a public backlash in national news.

The Trolley Problem
Philosophers have been thinking about ethics for thousands of years, and we can apply that experience to robot cars. One classical dilemma, proposed by philosophers Philippa Foot and Judith Jarvis Thomson, is called the Trolley Problem: Imagine a runaway trolley (train) is about to run over and kill five people standing on the tracks. Watching the scene from the outside, you stand next to a switch that can shunt the train to a sidetrack, on which only one person stands. Should you throw the switch, killing the one person on the sidetrack (who otherwise would live if you did nothing), in order to save five others in harm’s way?

A simple analysis would look only at the numbers: Of course it’s better that five persons should live than only one, everything else being equal. But a more thoughtful response would consider other factors too, including whether there’s a moral distinction between killing and letting die: It seems worse to do something that causes someone to die (the one person on the sidetrack) than to allow someone to die (the five persons on the main track) as a result of events you did not initiate and bear no responsibility for.
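To make the contrast concrete, here is a minimal sketch of two candidate decision rules, written in Python. Everything in it is illustrative: the function names, the inputs, and especially the numeric penalty standing in for the killing/letting-die distinction are assumptions for the sake of example, not anyone’s actual vehicle software.

```python
# Toy illustration of the Trolley Problem trade-off (hypothetical; not any
# real vehicle's decision logic). Two candidate rules for whether to "act"
# (e.g., throw the switch, swerve).

def utilitarian_rule(deaths_if_act: int, deaths_if_do_nothing: int) -> bool:
    """Act whenever acting results in fewer deaths: a pure numbers analysis."""
    return deaths_if_act < deaths_if_do_nothing

def killing_vs_letting_die_rule(deaths_if_act: int,
                                deaths_if_do_nothing: int,
                                act_penalty: float = 4.0) -> bool:
    """Act only if the lives saved outweigh a penalty for actively causing
    a death. The penalty is an arbitrary stand-in for the moral weight of
    killing versus letting die."""
    return deaths_if_act + act_penalty < deaths_if_do_nothing

# Classic trolley case: five die if we do nothing, one dies if we act.
print(utilitarian_rule(1, 5))             # True: throw the switch
print(killing_vs_letting_die_rule(1, 5))  # False: 1 + 4 is not less than 5
```

The only point of the sketch is that the two rules disagree on identical inputs. Choosing between them, or choosing the size of the penalty, is exactly the kind of ethical judgment the essay argues cannot be left implicit in the engineering.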

Patrick Lin is the director of the Ethics + Emerging Sciences Group at California Polytechnic State University, San Luis Obispo; a visiting associate professor at Stanford's School of Engineering; and an affiliate scholar at Stanford Law School. He is the lead editor of Robot Ethics and the co-author of What Is Nanotechnology and Why Does It Matter? and Enhanced Warfighters: Risk, Ethics, and Policy.
