Pain Rays and Robot Swarms: The Radical New War Games the DOD Plays

From an ethics perspective, it gets complicated. If the pain-ray is truly safe, its benefits are highly desirable: We'd have a nonlethal option between shouting and shooting, which would be better for foreign relations. And certainly it's better to cause temporary pain than to mortally wound. At the same time, observing established laws and norms is important. This suggests that we might want to clarify or reconsider the principle of distinction if we think such nonlethal weapons ought to be allowed, all things considered.

From a policy perspective, how adversaries perceive the weapon also matters. If the pain-ray is seen as inhumane, it could escalate, not defuse, a situation. It could make an agitated person even angrier, as inflicting pain often does. Importantly, the Active Denial System fell victim to bad public relations: Media sources reported a range of possible and invented harms, from eye damage and other burns to death and disfigurement, such as shrinking a body to half its size. Adversaries decried the weapon as "cooking" its targets alive. Critics worried that it could be abused, for instance to force enemies out of a bunker in order to shoot them, or to torture.

Currently, the Active Denial System is still sidelined, despite more than $100 million in development costs. Much of that cost and effort perhaps could have been saved if we had engaged these and other issues earlier, as many in the defense community are coming to understand.

Scenario: Swarm robots

Returning to the opening scenario, let's consider another option besides the counter-virus. Suppose that we want more evidence before we launch an attack: We want confirmation that the rogue nation really is stockpiling bioweapons and has hostile intentions. We have developed autonomous microsystems--stealthy robot bugs--that can undertake intelligence gathering, such as capturing video and technical information. Further, the robots can conduct "swarming sabotage" if needed, targeting no personnel but eating away at key production materials, like a plague of locusts. Should we deploy these micro robots?

From a legal perspective, it matters whether we're initiating a military action or an intelligence mission. If it's a military action (conducted under Title 10 authority of the United States Code), our robot bugs may be viewed as an attack and therefore provoke an aggressive response, perhaps triggering the biowarfare that we sought to avoid. But if it's an intelligence operation (conducted under Title 50 authority), we could better avoid escalating the crisis, as espionage usually isn't met with military force--it's just part of the games that nations play. (This issue is presently a concern for U.S. cyber-operations; for instance, is our hacking a use of force, or is it merely spying?) If the bugs are set to swarm mode, the attack must be attributable to us. This is required by the laws of armed conflict (LOAC), in part so that an innocent third party isn't blamed and subject to counterattack. But we would resist admitting our involvement if possible, since a clandestine strike (even if illegal) protects us against retaliation.

From an ethics perspective, we may be worried about the reliability of the robots. Will they work as advertised, that is, eat only production materials and not harm people? Who would be responsible if a robot bug malfunctions, runs amok, and attacks a person, say, an innocent child? If it's a programming error, perhaps we'd hold the manufacturer responsible; or if the environment was ill-suited for these robots in the first place and led to an accident, then maybe the commanding officer would be blamed--or even the president himself as commander-in-chief. But what if the malfunction was unforeseeable, such as a computer chip damaged when the robot is swatted or shot at? We could stipulate that the commanding officer is still responsible, but this doesn't seem quite fair. The responsibility chain thus needs to be much clearer.

From a policy perspective, we could be setting a precedent that opens us up to both spy- and sabotage-robot invasions, in addition to unattributed stealth attacks. Still, this may be better than openly attacking with incendiary bombs, a clear use of force that is more easily attributed to us and that virtually guarantees retaliation.

Scenario: Biomarkers

Laser transmission of drugs into cells (Science News)

Staying with the bioweapons scenario, suppose we decide to gather more information before conducting any attack, without resorting to our robot bugs. We've also developed biological markers that can be used to tag, track, and locate the key individuals involved with running the rogue nation's bioweapons program. Injected into an unsuspecting person from a distance by laser beam, these biomarkers communicate with satellites and can be used for information operations, intelligence activities, and direct action when it comes time for a strike. Should we tag those individuals with biomarkers?

From a legal standpoint, this option seems to avoid the earlier problems with the Biological Weapons Convention (BWC), as the biomarkers are not weapons themselves. But we may run into the distinction problem again, as we had with pain-rays: The individuals we tag (i.e., shoot with biomarkers) might not all be combatants. Some may be innocent truck drivers, for instance, who are critical links to the production process and can lead us to key locations; they may be unaware that they're even transporting materials for bioweapons. We must distinguish combatants from noncombatants in an attack, but must we do so in a biotagging operation? Even if we are intentionally aiming at noncombatant truck drivers, our projectile again does not seem to be a weapon at all, but an unobtrusive tracking device. It's unclear whether this makes a difference for the principle of distinction or the BWC. On the other hand, if this is an intelligence operation under Title 50 authority, and not a military operation, then LOAC does not come into play.

Patrick Lin is the director of the Ethics + Emerging Sciences Group at California Polytechnic State University, San Luis Obispo; a visiting associate professor at Stanford's School of Engineering; and an affiliate scholar at Stanford Law School. He is the lead editor of Robot Ethics and the co-author of What Is Nanotechnology and Why Does It Matter? and Enhanced Warfighters: Risk, Ethics, and Policy.
