Get Ready: The Autonomous Drones Are Coming


And they're not very different from piloted drones. Why smart policy depends on grasping that.

(Image: still from Terminator Salvation, 2009. Warner Bros.)

What's more troubling than lethal drones? Lethal drones that can think for themselves.

Such machines are worth worrying about not because of the prospect that we'll suffer some Terminator-style robot uprising, but because in the next few decades we'll need to make some extremely difficult choices about when it's okay for a computer to end a human life. The ongoing debate over whether pilot-controlled drones are a legitimate instrument of violence is contentious enough already without the added problems of artificial intelligence and remote accountability.

Nothing is inevitable, but over the next few decades, it'll be very hard to avoid the moment when autonomous drones make their way to the battlefield. So long as there exists a military incentive to limit risk and enhance lethality -- and so long as commanders control procurement -- the temptation to move toward autonomous drones will be irresistible. On logistical grounds alone, robots that can swarm and operate semi-independently of humans represent huge cost savings -- aside from the occasional firmware patch and other maintenance, robots don't need food, health care, or retirement benefits. A sophisticated drone programmed to make the same choices as its human counterpart could also eliminate errors brought on by weather, fear, and other distractions.

The idea that a machine could exhibit the moral and emotional complexity of a human seems laughable -- at least right now. Our relationship to computers as tools has conditioned us to view remote-piloted drones and fully autonomous drones as two distinct phenomena. But perhaps the divide between them is narrower than we think -- a product of our biases rather than of the technology. In fact, both technically and ethically, fully autonomous drones may not be that different from the ones we use now.

* * *

Suppose that a thinking robot follows all the appropriate laws of war. Human Rights Watch's Tom Malinowski describes the problem this way:

The robot, in other words, might reach the same "legal" conclusion in such a scenario as a JAG officer. But let's remember: proportionality decisions require weighing the relative value of a human life. Would we be comfortable letting robots do that? How would we feel if someone we loved were killed in a military strike, and we were informed that a computer, rather than a person, decided that their loss was proportionate to the value of a military target?

Implicit in Malinowski's last question is a premise: a family whose son was killed by an unsupervised drone ought to be more upset than if they'd learned he'd been killed by another human being.

On its face, this makes perfect sense. A death dealt by a machine feels unfair -- a cheap shot. Yet the history of combat is the tale of one cheap shot after another. From the bow to the longbow, the rifle to the sniper rifle, the hot-air balloon to the jet fighter, military tacticians have been obsessed with increasing the range of their weaponry. Killing more of the enemy from afar weakens their resolve while preserving your own resources.

That we've long been on a quest for the ultimate ranged weapon doesn't mean greater range is always the answer. One unfortunate consequence of the high-tech style of war pioneered by Donald Rumsfeld, the former U.S. defense secretary, was that troops operating off the battlefield couldn't relate face-to-face with Afghans and Iraqis -- precisely the wrong approach in what ultimately became a war for public opinion.

But in the same way that obviously unfair technologies like camouflage and submarines eventually became commonplace (the historian Michael L. Hadley writes that subs were initially regarded as "at best ungallant devices, at worst profoundly evil"), the transition to fully autonomous drones may be unstoppable. The tactical advantages are just too tempting. Not that it would matter to the hypothetical family whose son our drones just killed.

And this is precisely the point. The range at which this family's son was killed would probably matter less in the days that followed than the fact that he was killed at all, to say nothing of whether the weapons platform was manned, unmanned, or totally autonomous. Chances are, they wouldn't even know.

* * *

If it becomes impossible to tell whether a robot or a human dealt the killing blow, the boundary between the two begins to break down, and the question over drones shifts from whether it's okay to let a computer take the shot to why we feel uncomfortable letting it.

The answer has almost nothing to do with the technology itself.

People like to be in control. The problem with this attitude is twofold: First, we generally think we're more competent than we really are. This is why driving is so dangerous -- and why everyone else on the highway looks like a maniac to you.

Second, we already surrender much of our lives to self-governed systems. Traffic lights are largely automated. So are many aircraft maneuvers -- not to mention pacemakers, subways, escalators, elevators, and, someday soon, self-driving cars. For people not already socialized to see these as normal technologies, entrusting their safety to such systems would be a tall order.

The fact that we're okay with non-lethal programs performing as expected but not with lethal programs performing as expected says more about our adaptiveness as a species than about autonomous drones per se. (I find the thought of either kind of program malfunctioning equally disturbing.)

Treating self-governed drones as a totally foreign and abhorrent concept makes it hard to plan ahead. These weapons are coming, whether we like them or not.

In time, I expect we'll grow accustomed to the idea of autonomous drones doing our killing for us, but we'll have a lot of say over how that process occurs. To say so isn't an endorsement of pre-programmed warfare -- after all, even today, you can be a drone proponent without supporting current policy -- but it does suggest that the gulf between human-driven killing and machine-driven killing isn't as wide as we might think.

Brian Fung is the technology writer at National Journal. He was previously an associate editor at The Atlantic and has written for Foreign Policy and The Washington Post.
