Pain Rays and Robot Swarms: The Radical New War Games the DOD Plays

From an ethics standpoint, this option might not seem much different from other permissible intelligence, surveillance, and reconnaissance (ISR) operations: we're just following some people around. Does it really matter whether we're aiming telescopes at them or biomarker laser-rifles, if neither does any injury? If any of the targets are U.S. citizens, however, then domestic U.S. privacy law and ethics may apply to the data we collect. The ethicist would also be concerned about the biomarker's risks to human health, as well as the risk of accidentally shooting into a target's eye or other untested areas.

From a policy standpoint, adversaries may react badly, comparing our operation to the tagging of animals. Our treatment of their people, they might say, is inhumane or disrespectful. And this could ignite resentment and aid the recruitment of more sympathizers to their cause.

Scenario: Soldier enhancements

(Image: Alexis C. Madrigal)

In the same scenario, suppose we have now gathered enough evidence to be confident that the rogue nation indeed plans to threaten us with a bioattack. The bioweapons program, however, resides deep underground on a mountainside. As an alternative to a tactical nuclear strike, we have developed a vaccine against the pathogen and inoculated a special operations unit with it. Further, this unit has been physically and cognitively enhanced--able to easily stay awake for days and twice as strong as a normal soldier--in order to traverse the difficult terrain, infiltrate the underground facility, and take down the bioweapons program with a reasonable probability of success. Should we deploy the enhanced unit?

From a legal perspective, we again seem to avoid the earlier problems with the BWC, since human enhancements are not weapons, even if they are biologically based technologies. For instance, the BWC isn't concerned with regulating vaccines, anabolic steroids, or "smart drugs." But sending in a combat unit to destroy the bioweapons program clearly would be a use of force, and this open declaration of hostilities demands careful thought. A major consideration is how imminent the rogue nation's bioattack is: that determines whether our action is a preemptive or a preventive strike, and the legality of the latter (where there is no clear imminence) is currently under dispute.

From an ethics perspective, we might not be so quick to dismiss the BWC here, since that convention neither explicitly addresses nor rules out enhancement technologies. So we may examine the ethics or principles underwriting the BWC to see what legal conclusions about enhancements ought to follow. It's unclear that the BWC's concern is limited to microscopic agents: a bioengineered insect or animal may plausibly be of interest to the BWC, so why not also the human warfighter, especially if he or she is enhanced controversially, such as with a berserker drug? Further, ethics would be concerned about the risk the enhancement poses to the soldier as well as to the local population. As an example, anabolic steroids already throw some users into fits of rage; if approved for use by soldiers, this performance-enhancer could lead to indiscriminate killings and abuse of civilians. A related issue is whether the soldier has given full and informed consent to an enhancement and its risks, and whether consent is even required in a military setting where coercion and commands are the norm.

From a policy perspective, we continue to be worried that our first use of any new weapon would "let the genie out of the bottle," setting a precedent for others to follow. Where our use of drone strikes today has been called cowardly and dishonorable by adversaries, imagine what they might say about enhanced human warfighters, perhaps unnatural abominations in their eyes. Deploying ground forces at all, unlike drones, also runs the risk that our personnel may be captured, creating another crisis.

Scenario: E-bombs

Boeing concept for a cruise missile with an electromagnetic warhead (Boeing)

Even straightforward, more conventional scenarios give rise to dilemmas, such as this one: A hostile nation has sent warships toward some islands in a territorial dispute. The U.S. is committed to defending those islands, but we'd rather not deploy personnel or engage in an offensive attack. We considered using defensive robots--such as "smart mines" that attack only enemy ships in a security zone--but we're still concerned about the reliability and responsibility issues mentioned above. For instance, we can't be certain enough that a damaged smart mine won't attack an illegal target, say, a fishing boat, or travel outside the minefield. But an "e-bomb" may be a better option: a weapon that releases an electromagnetic pulse (EMP) to disable all electronics around a target. With it, we could stop the warships in their tracks without resorting to physical, provocative force. Should we use the e-bomb?

From a legal standpoint, since there are no civilians nearby, we don't need to worry about the principle of distinction. Or do we? If a hospital ship were traveling nearby, we'd generally need to take care not to harm or disable that vessel, following the Geneva Conventions. More importantly, it's still in dispute whether an electronic (or cyber) attack counts as a "use of force" under LOAC, such as article 2(4) of the UN Charter. If it does, even an e-attack could provoke an armed counterstrike, and then the war is on. So far, the enemy warships have only been sailing, and it may be unclear whether a hostile invasion was really imminent in the first place--that is, whether we can appeal to a right of self-defense, as allowed by article 51 of the UN Charter.


Patrick Lin is the director of the Ethics + Emerging Sciences Group at California Polytechnic State University, San Luis Obispo; a visiting associate professor at Stanford's School of Engineering; and an affiliate scholar at Stanford Law School. He is the lead editor of Robot Ethics and the co-author of What Is Nanotechnology and Why Does It Matter? and Enhanced Warfighters: Risk, Ethics, and Policy.
