If a robot soldier commits a war crime, who is held accountable?
You can't punish a collection of parts and coding algorithms. But can you blame a human commander who gave a legal order, only to see the robot carry it out incorrectly? And what about the defense manufacturers, which are often immune from the kinds of lawsuits that would plague civilian outfits if their products cost lives?
The culpability question is one of a host of thorny moral dilemmas presented by lethal robots. On the one hand, if effective, robot soldiers could replace ground troops and prevent thousands of American casualties. And robots aren't susceptible to many of the weaknesses that plague humans: exhaustion, sickness, infection, emotion, indecision.
But even if robot warriors can keep American lives out of danger, can they be trusted with the complicated combat decisions now left to human judgment?
Rep. Jim McGovern thinks not.
The Massachusetts Democrat is part of a crusade for an international ban on killer robots — machines that can decide without human input whom to target and when to use force.
The only way to stop killer robots, said McGovern and a series of panelists he assembled for a Capitol Hill briefing this week, is to ban them before they even exist. Much as with drones, once one country fields a killer robot, it's only a matter of time before everyone else is racing to catch up. And despite some countries' commitment to evaluating the technology responsibly, good intentions never won an arms race.