For as long as we’ve been able to make robots, we’ve been worried about them killing us.
In 1942, Isaac Asimov published a short story called "Runaround" that both coined the term "robotics" and introduced his Three Laws of Robotics, safeguards written precisely because robots might harm humans. Last week, one company set out to assure people that it, too, was worried about this potential threat.
Yes, the organized campaign against killer robots has gained momentum as the technology and militarization of robotics have advanced, and the smartest thing the movement has done is pick its name. "Killer robots" still isn't a well-defined term, but it's clearly a winning one.
Autonomous robotic systems have indeed come a long way since Asimov. Far enough that, in 2012, Human Rights Watch issued a report making the case against lethal autonomous weapons systems—weapons that can make lethal decisions without human involvement. Except they didn’t call them “lethal autonomous weapons systems.” The title of the report was “Losing Humanity: The Case Against Killer Robots.”
Mary Wareham, coordinator of the Campaign to Stop Killer Robots, admits it was a bit much. “We put killer robots in the title of our report to be provocative and get attention,” she says. “It’s shameless campaigning and advocacy, but we’re trying to be really focused on what the real life problems are, and killer robots seemed to be a good way to begin the dialogue.”
Ryan Gariepy, the chief technology officer at Clearpath Robotics, echoed Wareham: “It is a little bit sensationalist, and the engineer side of me thinks it’s a little bit not specific. But if that’s what society needs to address this issue, then that’s the way we’ll talk about it.”
Naming weapons and missions like this isn't new. The LGM-118A "Peacekeeper" was a missile that could carry warheads with a combined yield of up to 3,000 kilotons. Israel Aerospace Industries makes a missile named Gabriel, after the angel. A 2006 Israeli mission to bomb South Lebanon was named Mivtza Sachar Holem, "Operation Just Reward." When the United States invaded Iraq in 2003, it called the campaign Operation Iraqi Freedom. Researcher Charles Kauffman argues that as our weapons get more and more powerful, our names for them get more and more demure, to soften the idea of the damage they could do. But if you're in the business of making a weapon seem evil, "killer robot" is effective.
But not everybody defines “killer robot” the same way. For Clearpath, a killer robot is a robot that can make a decision to use lethal force without human intervention. At Human Rights Watch, the definition is broadened to include any robot that can choose to use force against a human, even if that force isn’t lethal.
In fact, there's a deliberate reluctance to pin down a single definition, Wareham says: "In fact there was a push away from that." That's at least in part because different organizations and agencies have distinct, and sometimes conflicting, goals for what discussion of killer robots might yield.
So while Human Rights Watch is seeking potential rules against “killer robots” that could regulate specific weapons, classes of weapons, weapons systems, or entire attack strategies, groups like Clearpath have to consider their clients—including both the Canadian and U.S. militaries. Once Clearpath hands over robotics technology to those governments, Gariepy acknowledges, the company has no control over how that technology is used.
For now, this nebulous mass of robotic entities that could kill or harm humans has a name without a solid definition—but it’s a really smart name. After all, Wareham says, “no government wants to be seen as pro-killer-robot.”