A panel discussion at Georgetown University last week probed an alarming question: Should the U.S. ever be comfortable delegating battlefield decisions about who lives or dies to autonomous weapons? Or should there always be a human "in the loop," as with today's drones? The technology for autonomous killing isn't yet advanced enough for combat use. It is nevertheless telling that this is the debate now happening in academic national-security circles: the ubiquity of drones is treated as an inevitability, and the "Overton window" has shifted. Autonomous killing is the application that seems at once plausible and too awful to permit.
Tom Malinowski of Human Rights Watch wants the U.S. to preemptively stigmatize autonomous killing. He favors international agreements that codify the U.S. military's current standard: a trained human being must always play a role in pulling the trigger. Malinowski asks:
Could a machine do something that human soldiers throughout the centuries have rarely done, but sometimes do to very important effect -- to refuse to follow orders? I'm convinced that, if these weapons are developed, they're not just going to be deployed by the United States and Sweden, they're going to be deployed by dictatorships. They're going to be deployed by countries that primarily see them as a way of controlling domestic unrest and domestic opposition. I imagine a future Bashar Assad with an army of fully autonomous weapons thirty years from now, fifty years from now. We've seen in history that one limit on the ability of unscrupulous leaders to do terrible things to their people and to others is that human soldiers, their human enforcers, have certain limits. There are moments when they say no. And those are moments when those regimes fall. Robotic soldiers would never say no. And I'd like us not to go there.
In contrast, Ben Wittes of the Brookings Institution argued against preempting the rise of autonomous killing machines.