Pain Rays and Robot Swarms: The Radical New War Games the DOD Plays

An insider's look at why ethics, policy, and law matter to current and future warfare
(Photo: Reuters/Alexis C. Madrigal)

In the year 2025, a rogue state--long suspected of developing biological weapons--now seems intent on using them against U.S. allies and interests. Anticipating such an event, we have developed a secret "counter-virus" that could infect and destroy their stockpile of bioweapons. Should we use it?

From a legal standpoint, it seems pretty straightforward that the use of bioweapons by either party would violate the Biological Weapons Convention (BWC) and therefore be prohibited. From a policy standpoint, our first use of a bioweapon would set a most dangerous precedent for other states to justify their own use of bioweapons--the very thing we want to prevent. But from an ethics standpoint, leaving the BWC aside, perhaps we are merely "practicing medicine" with a counter-virus--a vaccination of sorts--and surely that seems permissible.

A proper analysis, of course, goes much further. The lawyer also may point out that we've already violated the BWC by developing a counter-virus for war. The ethicist may argue that other options seem worse: A Stuxnet-like cyberattack may accidentally knock out containment systems and release the bioweapons, and incendiary bombs would kill both pathogens and people, potentially triggering an open war. The policy advisor, on the other hand, may prefer a more devastating, disproportionate attack to send a message to the world: We will not tolerate bio-threats, so don't even try it. But would this policy fuel animosity and thus be counterproductive to peace?

A new kind of wargaming

These questions were part of actual discussions held last month at an unusual wargame sponsored by the U.S. Department of Defense's Rapid Reaction Technology Office and the U.S. Naval Academy. Hosted by the consultancy Noetic Group and directed by Dr. Peter W. Singer--author of Wired for War, a book partially responsible for raising global awareness about the "drone war" and its controversies--the event was part of the NeXTech wargaming series focused on emerging and future military technologies.

The technologies of interest are potential "game-changers": biotechnologies (e.g., human enhancements), energy (e.g., lasers and superefficient batteries), materials (e.g., 3D printing), hardware (e.g., robots), and software (e.g., electromagnetic and cyberweapons). But this particular wargame was dedicated to the ethical, policy, and legal issues these technologies raise, helping to identify friction points as well as to test how those considerations can be better integrated into national-security planning and military-technology development.

As an invited participant in that and previous wargames, I will describe some of the scenarios we considered and the issues they raised, which also gives a glimpse of the work that's still left to do. A caveat: because the event was held under the Chatham House Rule, I won't attribute statements to any particular person, and I will simplify these scenarios and discussions, including substantial disagreements, for the sake of exposition.

To begin with, let's quickly define ethics, policy, and law, since this essay is about their interplay and surprising differences. Many decision-makers in both business and government treat ethics, policy, and law as if they were the same animal; and even if they're understood to be different, they are usually seen as marching in lockstep with one another. But philosophers have long appreciated the distinctions: what is legal isn't always ethical (such as the death penalty, some would say), and what is unethical isn't always illegal (such as adultery). Likewise, effective policy might be illegal or unethical (such as mutually assured destruction), and well-intentioned law or ethics may drive bad policies (such as a war on drugs, again some would say).

Contrary to popular opinion, ethics is more than "gut reactions" or intuitions; it's about drawing out and applying broader principles that ought to guide our actions, such as maximizing happiness, respecting autonomy, doing no harm, or treating others as you'd want to be treated. Policy, in contrast, often takes a more pragmatic or realist approach, giving much weight to broader effects with an eye toward achieving certain goals; and so policy can diverge from ethics. And law is about complying with rules established and enforced by governments, under penalty of punishment. (As incomplete as these definitions are, they should be enough for our purposes here.)

In the following scenarios of future warfare, circa 2025, I will tease out tensions among the three areas.

Scenario: Pain rays

(Photo: U.S. Army)

Imagine if you were a U.S. soldier posted at a road checkpoint in a foreign land. You order an approaching car to halt, but the driver continues toward you. Maybe he doesn't understand you, or maybe he's a suicide bomber. Either way, when he fails to comply one more time, you are left with little choice but to aim your rifle and shoot the driver. A "pain-ray"--a microwave-like energy beam that causes pain so unbearable that any person would run away from it--would have been handy in this situation and others, such as crowd control. With that technology, we wouldn't need to escalate so quickly to deadly force, and that's good for both us and the target.

This future is already here. We've had the pain-ray for decades but never used it in a conflict. Why? The answer reveals how ethics, policy, and law are critical to decisions about whether to field a new technology, and why the Department of Defense is focusing more intently on these areas.

From a legal perspective, using the pain-ray--as the U.S. military's Active Denial System has been called--against a mob that may contain both true combatants and merely angry protestors is likely illegal in war. A bedrock principle in the laws of armed conflict (LOAC)--the body of international humanitarian law that includes the Geneva and Hague Conventions--is the principle of distinction: Warring parties are never permitted to intentionally target noncombatants, even with nonlethal weapons and even for the target's own good. End of story.

From an ethics perspective, it gets complicated. If the weapon is truly safe, the benefits of a pain-ray are highly desirable: We'd have a nonlethal option between shouting and shooting, which would be better for foreign relations. And certainly it's better to cause temporary pain than to mortally wound. At the same time, observing established laws and norms is important. This suggests that we might want to clarify or reconsider the principle of distinction, if we think such nonlethal weapons ought to be allowed, all things considered.

From a policy perspective, how adversaries perceive the weapon also matters. If the pain-ray is seen as inhumane, then it could escalate, not defuse, a situation. It could make an agitated person even angrier, as inflicting pain often does. Importantly, the Active Denial System fell victim to bad public relations: Media sources reported a range of possible and invented harms, from eye damage and other burns to death and disfigurement, such as shrinking a body to half its size. Adversaries decried the weapon as "cooking" its targets alive. Critics worried that it could be abused--say, to force enemies out of a bunker in order to shoot them, or to torture.

Currently, the Active Denial System remains sidelined, despite more than $100 million in development costs--money and effort we perhaps could have saved had we engaged these and other issues earlier, as many in the defense community are coming to understand.

Scenario: Swarm robots

Returning to the opening scenario, let's consider another option besides the counter-virus. Suppose that we want more evidence before we launch an attack: We want confirmation that the rogue nation really is stockpiling bioweapons and has hostile intentions. We have developed autonomous microsystems--stealthy robot bugs--that can undertake intelligence gathering, such as capturing video and technical information. Further, the robots can conduct "swarming sabotage" if needed, targeting no personnel but eating away at key production materials, like a plague of locusts. Should we deploy these micro robots?

Patrick Lin is the director of the Ethics + Emerging Sciences Group at California Polytechnic State University, San Luis Obispo; a visiting associate professor at Stanford's School of Engineering; and an affiliate scholar at Stanford Law School. He is the lead editor of Robot Ethics and the co-author of What Is Nanotechnology and Why Does It Matter? and Enhanced Warfighters: Risk, Ethics, and Policy.
