Pain Rays and Robot Swarms: The Radical New War Games the DOD Plays

From a legal perspective, it matters whether we're initiating a military action or an intelligence mission. If it's a military action (conducted under Title 10 authority of the United States Code), our robot bugs may be viewed as an attack and therefore provoke an aggressive response, perhaps triggering the biowarfare that we sought to avoid. But if it's an intelligence operation (conducted under Title 50 authority), we could better avoid escalating the crisis, as espionage usually isn't met with military force--it's just part of the games that nations play. (This issue is presently a concern for U.S. cyber-operations; for instance, is our hacking a use of force, or is it merely spying?) If the bugs are set to swarm mode, the attack must be attributable to us. This is required by the laws of armed conflict (LOAC), in part so that an innocent third party isn't blamed and subjected to counterattack. But we would resist admitting our involvement if possible, since a clandestine strike (even if illegal) protects us against retaliation.

From an ethics perspective, we may be worried about the reliability of the robots. Will they work as advertised, that is, eat only production materials and not harm people? Who would be responsible if a robot bug malfunctions, runs amok, and attacks a person--say, an innocent child? If it's a programming error, perhaps we'd hold the manufacturer responsible; or if the environment was ill-suited for these robots in the first place and led to an accident, then maybe the commanding officer would be blamed--or even the president himself as commander-in-chief. But what if the malfunction was unforeseeable, such as a computer chip damaged by being swatted or shot at? We could stipulate that the commanding officer is still responsible, but this doesn't seem quite fair. The responsibility chain thus needs to be much clearer.

From a policy perspective, we could be setting a precedent that exposes us to both spy- and sabotage-robot invasions, in addition to unattributed stealth attacks. Still, this may be better than openly attacking with incendiary bombs, a clear use of force that is more easily attributed to us and that virtually guarantees retaliation.

Scenario: Biomarkers

[Image: Laser transmission of drugs into cells (Science News)]

Staying with the bioweapons scenario, suppose we decide to gather more information before conducting any attack, without resorting to our robot bugs. We've also developed biological markers that can be used to tag, track, and locate the key individuals involved in running the rogue nation's bioweapons program. Injected into an unsuspecting person from a distance by laser beams, these biomarkers communicate with satellites and can be used for information operations and intelligence activity, as well as for direct action when it comes time for a strike. Should we tag those individuals with biomarkers?

From a legal standpoint, this option seems to avoid the earlier problems with the Biological Weapons Convention (BWC), as the biomarkers are not weapons themselves. But we may run into the distinction problem again, as we had with pain rays: The individuals we tag (i.e., shoot with biomarkers) might not all be combatants. Some may be innocent truck drivers, for instance, who are critical links in the production process and can lead us to key locations; they may be unaware that they're even transporting materials for bioweapons. We must distinguish combatants from noncombatants in an attack, but must we do so in a biotagging process? While we may be intentionally aiming at noncombatant truck drivers, our projectile does not seem to be a weapon at all, but an unobtrusive tracking device. It's unclear whether this makes a difference for the principle of distinction or the BWC. On the other hand, if this is an intelligence operation under Title 50 authority, and not a military operation, then LOAC does not come into play.

From an ethics standpoint, this option might not seem much different from other intelligence, surveillance, and reconnaissance (ISR) operations that are permissible: we're just following some people around. Does it really matter whether we're aiming telescopes at them or biomarker laser-rifles, if neither does any injury? If any of the targets are U.S. citizens, however, then domestic U.S. privacy law and ethics may apply to the data we collect. The ethicist also would be concerned about the biomarker's risk profile for human health, as well as the risk of accidentally shooting into a target's eye or other untested areas.

From a policy standpoint, adversaries may react badly, comparing our operation to the tagging of animals. Our treatment of their people, they might say, is inhumane or disrespectful. And this could ignite resentment and help recruit more sympathizers to their cause.

Scenario: Soldier enhancements

[Image: Alexis C. Madrigal]

In the same scenario, suppose we have now gathered enough evidence to be confident that the rogue nation indeed plans to threaten us with a bioattack. The bioweapons program, however, resides deep underground on a mountainside. As an alternative to a tactical nuclear strike, we have developed a vaccine against the pathogen and inoculated a special operations unit with it. Further, this unit has been physically and cognitively enhanced--able to easily stay awake for days and twice as strong as a normal soldier--in order to traverse the difficult terrain, infiltrate the underground facility, and take down the bioweapons program with a reasonable probability of success. Should we deploy the enhanced unit?

From a legal perspective, we again seem to avoid the earlier problems with the BWC, since human enhancements are not weapons, even if they are biologically based technologies. For instance, the BWC isn't concerned with regulating vaccines, anabolic steroids, or "smart drugs." But sending in a combat unit to destroy the bioweapons program clearly would be a use of force, and such an open declaration of hostilities demands careful thought. A major consideration is how imminent the rogue nation's bioattack is--which determines whether our action is a preemptive or a preventive strike; the legality of the latter (where there is no clear imminence) is currently under dispute.

From an ethics perspective, we might not be so quick to dismiss the BWC here, since that convention neither explicitly addresses nor rules out enhancement technologies. So we may examine the ethics or principles underwriting the BWC to see what legal conclusions about enhancements ought to follow. It's unclear that the BWC's concern is limited only to microscopic agents: a bioengineered insect or animal may plausibly be of interest to the BWC; so why not also the human warfighter, especially if s/he is enhanced controversially, such as with a berserker drug? Further, ethics would be concerned about the risk posed by the enhancement to the soldier as well as to the local population. As an example, anabolic steroids already throw some users into fits of rage; if approved for use by soldiers, this performance enhancer may lead to indiscriminate killings and abuse of civilians. A related issue is whether the soldier has given full and informed consent to an enhancement and its risks, and whether consent is even required in a military setting where coercion and commands are the norm.

From a policy perspective, we continue to be worried that our first use of any new weapon would "let the genie out of the bottle," setting a precedent for others to follow. Where our use of drone strikes today has been called cowardly and dishonorable by adversaries, imagine what they might say about enhanced human warfighters, perhaps unnatural abominations in their eyes. And unlike drones, deploying ground forces runs the risk that our personnel may be captured, creating another crisis.

Patrick Lin is the director of the Ethics + Emerging Sciences Group at California Polytechnic State University, San Luis Obispo; a visiting associate professor at Stanford's School of Engineering; and an affiliate scholar at Stanford Law School. He is the lead editor of Robot Ethics and the co-author of What Is Nanotechnology and Why Does It Matter? and Enhanced Warfighters: Risk, Ethics, and Policy.
