The problem is, focusing on outcomes risks blinding people to the virtues and vices of the robocar rollout. Just as Foot’s trolley-and-workers scenario is morally different from her judge-and-rioters example, so too might autonomous-vehicle outcomes with the same human costs entail quite different moral, legal, and civic consequences.
Recently, an autonomous Uber in Tempe, Arizona, struck and killed 49-year-old Elaine Herzberg, a pedestrian walking a bicycle across a road. After I wrote about the possible legal implications of the collision, some readers responded with utilitarian sneers. After all, 5,376 pedestrians were killed by cars in the United States in 2015, and news outlets don’t tend to cover each of those deaths as if it were a special case. Soon enough, autonomous cars could reduce or eliminate pedestrian deaths. If you put this idea in trolley-problem terms, the tracks would represent time rather than space. One death is still a tragedy, but if it means making progress toward the prevention of thousands, then perhaps it is justified.
But that position assumes that Herzberg’s death is identical to any of the unfortunate thousands killed by conventional vehicles. Statistically that might be true, but morally, it isn’t necessarily so.
In the future, if they operate effectively, autonomous cars (not to mention front-collision warning systems in conventional cars) are likely to prevent collisions like the one in Tempe far more reliably than human drivers do. Sensors and computers are supposed to perceive their surroundings and respond more effectively than human perception and reason allow. As details of the Uber collision have trickled in, some experts have concluded that it should have been avoided. Furthermore, as of March, when the crash occurred, Uber’s cars appear to have fallen short of a company goal of 13 miles of autonomous driving per human intervention. Meanwhile, Google’s sister company Waymo claims that its cars can go an average of 5,600 miles without needing a human to take the reins.
On Arizona’s roads today, then, the difference between a Waymo autonomous vehicle and an Uber one might be more important than the difference between a human-operated and a computer-operated vehicle. But in order to lure self-driving car research, testing, and employment to the state, Arizona Governor Doug Ducey allowed all such vehicles to share the roads without significant regulatory oversight.
None of these conditions are addressed by pondering a trolley-problem scenario. To ask whether the Uber should have struck Herzberg or swerved onto the shoulder (putting the operator at risk to avoid the pedestrian collision) presumes that the Uber vehicle could see the pedestrian in the first place and respond accordingly. It assumes that this ability is reliable and guaranteed, the equivalent of a mechanical act like throwing a lever to switch a trolley’s tracks. This context, missing from the trolley-problem scenario, turned out to be the most important aspect of the outcome in Tempe, in terms of both its consequences and its morality.