It's Come to This: Debating Death by Autopilot

Drones aren't going away. The controversial question today is whether they should ever be allowed to kill on their own.

A panel discussion at Georgetown University last week probed an alarming question: Should the U.S. ever be comfortable delegating battlefield decisions about who should live or die to autonomous weapons? Or should there always be a human "in the loop," as with today's drones? The technology for autonomous killing isn't yet advanced enough for use in combat. It is nevertheless telling that this is the debate that's happening in academic national security circles: The ubiquity of drones is treated as an inevitability, and the "Overton window" has shifted. Autonomous killing is now the application that seems at once plausible and too awful to permit.

Tom Malinowski of Human Rights Watch wants the U.S. to preemptively stigmatize autonomous killing. He favors international agreements that codify our military's current standard: A trained human being must always play a role in pulling the trigger. Malinowski asks:

Could a machine do something that human soldiers throughout the centuries have rarely done, but sometimes do to very important effect -- to refuse to follow orders? I'm convinced that, if these weapons are developed, they're not just going to be deployed by the United States and Sweden, they're going to be deployed by dictatorships. They're going to be deployed by countries that primarily see them as a way of controlling domestic unrest and domestic opposition. I imagine a future Bashar Assad with an army of fully autonomous weapons thirty years from now, fifty years from now. We've seen in history that one limit on the ability of unscrupulous leaders to do terrible things to their people and to others is that human soldiers, their human enforcers, have certain limits. There are moments when they say no. And those are moments when those regimes fall. Robotic soldiers would never say no. And I'd like us not to go there.

In contrast, Ben Wittes of the Brookings Institution argued against any preemptive ban on autonomous killing machines.

"About 100 years ago, the world looked at the technological future and saw a menace. And it passed an international convention to preemptively ban an emerging class of technology that was so terrifying and so menacing that we simply had to get together and make sure that it would never be used in warfare," he began. "I'm talking, of course, about hot-air balloons. If you look at the Hague Resolutions of I believe 1897, there's this provision, the United States is signatory to it, called balloon warfare, and it bans it. And I mention this only to point out that the history of human certainty about the direction of technology is a very iffy proposition. We're not good at identifying where technology is going, how we should feel about it. By World War I, this was moot."

Humanity's inability to figure out the future was at the core of his argument.

"I don't want to sit here and make an affirmative case for fully autonomous weapons systems," he explained. "I've never argued for them. I've argued for agnosticism about them. I can imagine and I think we should keep in mind that we're all kind of bad at predicting where things are going. And therefore, we don't want to be the authors of the next balloon warfare convention."

He proceeded to make these points:

  • "People suck .... All the war crimes committed in the 20th century, and you're accounting for tens of millions of people, none of them was committed by anything other than that human judgment. We're not starting from a baseline in which human autonomous firing power should be considered a wonderful thing. If you look at all the Human Rights Watch reports about terrible things going on in the world, it's not robots they're complaining about, it's people. And so I don't start with the assumption that it isn't imaginable that we could improve on that performance."
  • We're particularly bad at predicting how autonomous technology will advance. We keep thinking it will arrive more quickly than it does. And we aren't very good at knowing which human qualities we can replicate and improve upon. "It's perfectly plausible to me that over time horizon x," he said, "we'll emerge in a world in which, for some applications, robots can do things dramatically better than humans can, including some firing applications. For some applications they do significantly worse. And for some applications they do about the same."
  • "A blanket rule of any kind is very ill-advised. ... Imagine a situation 25 or 30 years from now where we're still caught in the problem where those who are fighting are not in uniform. Like in an Iraq insurgency situation, there's no application for autonomous firing. But boy, robots are good at figuring out whether someone is in a North Korean Army uniform. And in the DMZ, you know, there are actually civilians. So the question is really, is the person on the other side or not. And I can imagine a situation in which you had certain applications in which robots did dramatically better than people."
  • "The fundamental tenet of international law comes down to accuracy and discrimination, doing the best you can to make judgments based on the information available to you. You have to discriminate to the best degree you can. You have to engage in a proportionality analysis. So my very modest proposition is that you cannot say preemptively and you should not say preemptively that there will never come a time when those values will not be required, will not require you to use technology that will do a much better job than you can do now."
  • Autonomous firing isn't all alike. Just because we don't think there will ever be a robot capable of deciding whom to kill in urban warfare on a Baghdad street doesn't mean there won't be a weapon good enough to autonomously take out an enemy submarine, a very different task.

He concluded that "we don't need a preemptive rule. We have fundamental rules of distinction, proportionality, and humanity. We have no desire on the part of the military to deploy unmanned fully autonomous weapons systems at this point. And we have no immediate prospect of the development of such systems at a level that would make their deployment attractive. So we don't really have a problem. And I think it's worth a certain modesty imagining the future and not putting ourselves in the shoes where we see a balloon and confuse it with a Terminator."

* * *

Regular readers won't be surprised that I favor efforts to prevent autonomous weapons from becoming an international norm. Why not err on the side of caution? At worst, you've overreacted. What was so bad about the treaty that regulated how hot-air balloons would be used? Constraining their use in war turned out to be pointless, but it didn't hurt the world in any way. If that's the cautionary tale, preemptively banning some weapons in war doesn't sound so unwise.

Admittedly, a ban could cause us to miss out on some unexpected application that kills fewer people than human judgment would. Why does that slim possibility seem to matter so much to Wittes? It's possible that the prohibition on establishing a state religion will prevent us from implementing the One True Way after a Revelation in 2035, robbing hundreds of millions of people of eternal salvation. The First Amendment is still a prudent rule.

Like state religions, autonomous weapons are unlikely to be deployed for the benefit of humanity. What Wittes ought to understand as well as anyone, if he believes that "people suck," is that humans and the nations they run generally deploy weapons to maximize their own interests and advantages. As an autonomous weapon is being prepared, its programmers are likely to be designing something with a primary objective other than "go out and minimize civilian casualties!" Pondering the futures that autonomous killing could bring, one finds that "insufficient stigma against a new weapon" has far more dramatic downside potential than "too much stigma."

Wittes suggests his agnosticism on autonomous drones is the most modest stance, as if it would leave all options open for the future. But the "wait and see" approach is itself constraining.

As Malinowski noted in his rebuttal, you can "wait until we and other countries develop these systems; incorporate them into planning; incorporate them into how we do warfare; see how it goes; see whether in fact they are better at distinction and proportionality than humans; or if they turn out to be quite bad at making the kind of very difficult, subtle judgments that I think require a human being on the battlefield to make." But at that point, what are your options as a country? "My concern is that, if it turns out badly," he continued, "it will be too late to make an effective decision, because if these weapons are ubiquitous, if other governments have them and they provide them with an advantage on the battlefield -- and certainly they will at some point provide an advantage on the battlefield -- it becomes very, very difficult to turn back the clock."

His alternative: Follow a principle whereby the United States always keeps a human "in the loop," try to persuade as many other countries as possible to commit to the same standard, and alter the agreement sometime in the distant future only if unexpected changes in technology really do surprise us. If changing course ever becomes a no-brainer, "the technology will exist," he concluded. "So it's a much safer approach to uncertainty, to start with a preemptive rule that says, 'We are going to maintain a human being in the role of decision-maker for uses of lethal force.'"

I think that's right.