It's Come to This: Debating Death by Autopilot

Drones aren't going away. The controversial question today is whether they should ever be allowed to kill on their own.

A panel discussion at Georgetown University last week probed an alarming question: Should the U.S. ever be comfortable delegating battlefield decisions about who should live or die to autonomous weapons? Or should there always be a human "in the loop," as with today's drones? The technology for autonomous killing isn't yet advanced enough for use in combat. It is nevertheless telling that this is the debate that's happening in academic national security circles: The ubiquity of drones is treated as an inevitability, and the "Overton window" has shifted. Autonomous killing is now the application that seems at once plausible and too awful to permit.

Tom Malinowski of Human Rights Watch wants the U.S. to preemptively stigmatize autonomous killing. He favors international agreements that codify our military's current standard: A trained human being must always play a role in pulling the trigger. Malinowski asks:

Could a machine do something that human soldiers throughout the centuries have rarely done, but sometimes do to very important effect -- to refuse to follow orders? I'm convinced that, if these weapons are developed, they're not just going to be deployed by the United States and Sweden, they're going to be deployed by dictatorships. They're going to be deployed by countries that primarily see them as a way of controlling domestic unrest and domestic opposition. I imagine a future Bashar Assad with an army of fully autonomous weapons thirty years from now, fifty years from now. We've seen in history that one limit on the ability of unscrupulous leaders to do terrible things to their people and to others is that human soldiers, their human enforcers, have certain limits. There are moments when they say no. And those are moments when those regimes fall. Robotic soldiers would never say no. And I'd like us not to go there.

In contrast, Ben Wittes of the Brookings Institution argued against preempting the rise of autonomous killing machines. 

"About 100 years ago, the world looked at the technological future and saw a menace. And it passed an international convention to preemptively ban an emerging class of technology that was so terrifying and so menacing that we simply had to get together and make sure that it would never be used in warfare," he began. "I'm talking, of course, about hot-air balloons. If you look at the Hague Resolutions of I believe 1897, there's this provision, the United States is signatory to it, called balloon warfare, and it bans it. And I mention this only to point out that the history of human certainty about the direction of technology is a very iffy proposition. We're not good at identifying where technology is going, how we should feel about it. By World War I, this was moot."

Humanity's inability to figure out the future was at the core of his argument.

"I don't want to sit here and make an affirmative case for fully autonomous weapons systems," he explained. "I've never argued for them. I've argued for agnosticism about them. I can imagine and I think we should keep in mind that we're all kind of bad at predicting where things are going. And therefore, we don't want to be the authors of the next balloon warfare convention."
 
He proceeded to make these points:

  • "People suck .... All the war crimes committed in the 20th century, and you're accounting for tens of millions of people, none of them was committed by anything other than that human judgment. We're not starting from a baseline in which human autonomous firing power should be considered a wonderful thing. If you look at all the Human Rights Watch reports about terrible things going on in the world, it's not robots they're complaining about, it's people. And so I don't start with the assumption that it isn't imaginable that we could improve on that performance."
  • We're particularly bad at predicting how autonomous technology will advance. We keep thinking it will arrive sooner than it does. And we aren't very good at knowing which human qualities we can replicate and improve upon. "It's perfectly plausible to me that over time horizon x," he said, "we'll emerge in a world in which, for some applications, robots can do things dramatically better than humans can, including some firing applications. For some applications they do significantly worse. And for some applications they do about the same."
  • "A blanket rule of any kind is very ill-advised. ... Imagine a situation 25 or 30 years from now where we're still caught in the problem where those who are fighting are not in uniform. Like in an Iraq insurgency situation, there's no application for autonomous firing. But boy, robots are good at figuring out whether someone is in a North Korean Army uniform. And in the DMZ, you know, there are actually civilians. So the question is really, is the person on the other side or not. And I can imagine a situation in which you had certain applications in which robots did dramatically better than people."
  • "The fundamental tenet of international law comes down to accuracy and discrimination, doing the best you can to make judgments based on the information available to you. You have to discriminate to the best degree you can. You have to engage in a proportionality analysis. So my very modest proposition is that you cannot say preemptively and you should not say preemptively that there will never come a time when those values will not be required, will not require you to use technology that will do a much better job than you can do now."
  • Autonomous firing isn't one undifferentiated category. Just because we don't think there will ever be a robot capable of deciding whom to kill in urban warfare on a Baghdad street doesn't mean there won't be a weapon good enough to autonomously take out an enemy submarine, a very different problem.

He concluded that "we don't need a preemptive rule. We have fundamental rules of distinction, proportionality, and humanity. We have no desire on the part of the military to deploy unmanned fully autonomous weapons systems at this point. And we have no immediate prospect of the development of such systems at a level that would make their deployment attractive. So we don't really have a problem. And I think it's worth a certain modesty imagining the future and not putting ourselves in the shoes where we see a balloon and confuse it with a Terminator."

Conor Friedersdorf is a staff writer at The Atlantic, where he focuses on politics and national affairs. He lives in Venice, California, and is the founding editor of The Best of Journalism, a newsletter devoted to exceptional nonfiction.
