In the year 2025, a rogue state--long suspected of developing biological weapons--now seems intent on using them against U.S. allies and interests. Anticipating such an event, we have developed a secret "counter-virus" that could infect and destroy their stockpile of bioweapons. Should we use it?
From a legal standpoint, it seems pretty straightforward that use of bioweapons by either party would violate the Biological Weapons Convention (BWC) and therefore be prohibited. From a policy standpoint, our first use of a bioweapon sets a most dangerous precedent for other states to justify their use of bioweapons--the very thing we want to prevent. But from an ethics standpoint, leaving the BWC aside, perhaps we are merely "practicing medicine" with a counter-virus--a vaccination of sorts--and surely this seems to be permissible.
A proper analysis, of course, goes much further. The lawyer also may point out that we've already violated the BWC by developing a counter-virus for war. The ethicist may argue that other options seem worse: A Stuxnet-like cyberattack may accidentally knock out containment systems and release the bioweapons, and incendiary bombs would kill both pathogens and people, potentially triggering an open war. The policy advisor, on the other hand, may prefer a more devastating, disproportionate attack to send a message to the world: We will not tolerate bio-threats, so don't even try it. But would this policy fuel animosity and thus be counterproductive to peace?
A new kind of wargaming
These scenarios were part of actual discussions last month at an unusual wargame sponsored by the U.S. Department of Defense's Rapid Reaction Technology Office and the U.S. Naval Academy. Hosted by the consultancy Noetic Group and directed by Dr. Peter W. Singer--author of Wired for War, a book partially responsible for raising global awareness about the "drone war" and its controversies--the event was part of the NeXTech wargaming series focused on emerging and future military technologies.
The technologies of interest are potential "game-changers": biotechnologies (e.g., human enhancements), energy (e.g., lasers and superefficient batteries), materials (e.g., 3D printing), hardware (e.g., robots), and software (e.g., electromagnetic and cyberweapons). But this particular wargame was dedicated to their ethics, policy, and legal issues, helping to identify friction points as well as to test how those considerations can be better integrated into national-security planning and military-technology development.
As an invited participant in that and previous wargames, I will describe some of the scenarios we considered and the issues they raised, which also gives a glimpse of the work that's still left to do. Because the event was held under the Chatham House Rule, I won't attribute statements to any particular person. I will also simplify these scenarios and discussions, including substantial disagreements, for the sake of exposition.
To begin with, let's quickly define ethics, policy, and law, since this essay is about their interplay and surprising differences. Many decision-makers in both business and government treat ethics, policy, and law as if they were the same animal; and even if understood to be different, they are usually seen as walking in lockstep with each other. But philosophers have long appreciated the distinctions: what is legal isn't always ethical (such as the death penalty, some would say), and what is unethical isn't always illegal (such as adultery). Likewise, effective policy might be illegal or unethical (such as mutually assured destruction), and well-intentioned law or ethics may drive bad policies (such as a war on drugs, again some would say).
Contrary to popular opinion, ethics is more than "gut reactions" or intuitions; it's about drawing out and applying broader principles that ought to guide our actions, such as maximizing happiness, respecting autonomy, doing no harm, or treating others as you'd want to be treated. Policy, in contrast, often takes a more pragmatic or realist approach, giving much weight to broader effects with an eye toward achieving certain goals; and so policy can diverge from ethics. And law is about complying with rules established and enforced by governments, under penalty of punishment. (As incomplete as these definitions are, they should be enough for our purposes here.)
In the following scenarios of future warfare, circa year 2025, I will tease out tensions among the three areas.
Scenario: Pain rays
Imagine that you are a U.S. soldier posted at a road checkpoint in a foreign land. You order an approaching car to halt, but the driver continues toward you. Maybe he doesn't understand you, or maybe he's a suicide bomber. Either way, when he fails to comply one more time, you are left with little choice but to aim your rifle and shoot the driver. A "pain-ray"--a microwave-like energy beam that causes pain so unbearable that any person would run away from it--would have been handy in this situation and others, such as crowd control. With that technology, we wouldn't need to escalate so quickly to deadly force, and that's good for us and the target.
This future is already here. We've had the pain-ray for decades but never used it in a conflict. Why? The answer reveals how ethics, policy, and law are critical to decisions to field a new technology or not, and why the Department of Defense is focusing more intently on these areas.
From a legal perspective, to use the pain-ray--as the U.S. military's Active Denial System has been called--against a mob that may contain both true combatants and merely angry protestors is likely illegal in war. A bedrock principle in the laws of armed conflict (LOAC), which is related to international humanitarian law and includes the Geneva and Hague Conventions, is the principle of distinction: Warring parties are never permitted to intentionally target noncombatants, even with nonlethal weapons and for the target's own good. End of story.
From an ethics perspective, it gets complicated. If truly safe, the benefits of a pain-ray are highly desirable: We'd have a nonlethal option between shouting and shooting, which would be better for foreign relations. And certainly it's better to cause temporary pain than to mortally wound. At the same time, observing established laws and norms is important. This suggests that we might want to clarify or reconsider the principle of distinction, if we think such nonlethal weapons ought to be allowed, all things considered.
From a policy perspective, how adversaries perceive the weapon also matters. If the pain-ray is seen as inhumane, then it could escalate, not defuse, a situation. It could make an agitated person even angrier, as inflicting pain often does. Importantly, the Active Denial System fell victim to bad public relations: Media sources reported a range of possible and invented harms, from eye damage and other burns to death and disfigurement, such as shrinking a body to half its size. Adversaries decried the weapon as "cooking" its targets alive. Critics worried that it could be abused, such as forcing enemies out of a bunker in order to shoot them, or for torture.
Currently, the Active Denial System is still sidelined, despite more than $100 million in development costs--tremendous costs and efforts we perhaps could have saved if we had engaged these and other issues earlier, as many in the defense community are coming to understand.
Scenario: Swarm robots
Returning to the opening scenario, let's consider another option besides the counter-virus. Suppose that we want more evidence before we launch an attack: We want confirmation that the rogue nation really is stockpiling bioweapons and has hostile intentions. We have developed autonomous microsystems--stealthy robot bugs--that can undertake intelligence gathering, such as capturing video and technical information. Further, the robots can conduct "swarming sabotage" if needed, targeting no personnel but eating away at key production materials, like a plague of locusts. Should we deploy these micro robots?
From a legal perspective, it matters whether we're initiating a military action or an intelligence mission. If it's a military action (conducted under Title 10 authority of the United States Code), our robot bugs may be viewed as an attack and therefore provoke an aggressive response, perhaps triggering the biowarfare that we sought to avoid. But if it's an intelligence operation (conducted under Title 50 authority), we could better avoid escalating the crisis, as espionage usually isn't met with military force--it's just part of the games that nations play. (This issue is presently a concern for U.S. cyber-operations; for instance, is our hacking a use of force, or is it merely spying?) If the bugs are set to swarm mode, the attack must be attributable to us. This is required by the laws of armed conflict, in part so that an innocent third-party isn't blamed and subject to counterattack. But we would resist admitting our involvement if possible, since a clandestine strike (even if illegal) protects us against retaliation.
From an ethics perspective, we may be worried about the reliability of the robots. Will they work as advertised, that is, eat only production materials and not harm people? Who would be responsible if a robot bug malfunctions, runs amok, and attacks a person, say, an innocent child? If it's a programming error, perhaps we'd hold the manufacturer responsible; or if the environment was ill-suited for these robots in the first place and led to an accident, then maybe the commanding officer would be blamed--or even the president himself as commander-in-chief. But what if the malfunction was unforeseeable, such as a damaged computer chip from being swatted or shot at? We could stipulate that the commanding officer is still responsible, but this doesn't seem quite fair. The responsibility chain thus needs to be much clearer.
From a policy perspective, we could be setting a precedent that opens ourselves to both spy- and sabotage-robot invasions, in addition to unattributed stealth attacks. Still, this may be better than openly attacking with incendiary bombs, a clear use of force that is more easily attributed to us and which virtually guarantees retaliation.
Staying with the bioweapons scenario, suppose we decide to gather more information before conducting any attack, without resorting to our robot bugs. We've also developed biological markers that can be used to tag, track, and locate the key individuals involved with running the rogue nation's bioweapons program. Injected into an unsuspecting person from a distance by laser beams, these biomarkers communicate with satellites and can be used for information operations, intelligence activity, as well as direct action when it comes time for a strike. Should we tag those individuals with biomarkers?
From a legal standpoint, this option seems to avoid earlier problems with the Biological Weapons Convention, as the biomarkers are not weapons themselves. But we may run into the distinction problem again, as we had with pain-rays: The individuals we tag (i.e., shoot with biomarkers) might not all be combatants. Some may be innocent truck drivers, for instance, who are critical links to the production process and can lead us to key locations; they may be unaware that they're even transporting materials for bioweapons. We must distinguish combatants from noncombatants in an attack, but must we do so in a biotagging process? While we may be intentionally aiming at noncombatant truck drivers, again our projectile does not seem to be a weapon at all, but an unobtrusive tracking device. It's unclear whether this makes a difference for the principle of distinction or BWC. On the other hand, if this is an intelligence operation under Title 50 authority, and not a military operation, then LOAC does not come into play.
From an ethics standpoint, this option might not seem much different from other intelligence, surveillance, and reconnaissance (ISR) operations that are permissible: we're just following some people around. Does it really matter whether we're aiming telescopes at them or biomarker laser-rifles, if neither does any injury? If any of the targets are U.S. citizens, however, then domestic U.S. privacy law and ethics may apply to the collected data. The ethicist also would be concerned about the risk the biomarker poses to human health, as well as the risk of accidentally shooting into a target's eye or other untested areas.
From a policy standpoint, adversaries may react badly, comparing our operation to the tagging of animals. Our treatment of their people, they might say, is inhumane or disrespectful. And this would ignite resentment and help recruitment of more sympathizers to their cause.
Scenario: Soldier enhancements
In the same scenario, suppose we have now gathered enough evidence to be confident that the rogue nation indeed plans to threaten us with a bioattack. The bioweapons program, however, resides deep underground on a mountainside. As an alternative to a tactical nuclear strike, we have developed a vaccine against the pathogen and inoculated a special operations unit with it. Further, this unit has been physically and cognitively enhanced--able to easily stay awake for days and twice as strong as a normal soldier--in order to traverse the difficult terrain, infiltrate the underground facility, and take down the bioweapons program with a reasonable probability of success. Should we deploy the enhanced unit?
From a legal perspective, we again seem to avoid earlier problems with the BWC, since human enhancements are not weapons, even if they are biologically based technologies. For instance, the BWC isn't concerned with regulating vaccines, anabolic steroids, or "smart drugs." But sending in a combat unit to destroy the bioweapons program clearly would be a use of force, and this is an open declaration of hostilities that demands careful thought. A major consideration is how imminent the rogue nation's bioattack is, which determines whether our action is a preemptive or a preventive strike; the legality of the latter (where there is no clear imminence) is currently under dispute.
From an ethics perspective, we might not be so quick to dismiss the BWC here, since that convention neither explicitly addresses nor rules out enhancement technologies. So, we may examine the ethics or principles underwriting the BWC to see what legal conclusions about enhancements ought to follow. It's unclear that the BWC's concern is limited to only microscopic agents: a bioengineered insect or animal may plausibly be of interest to the BWC; so why not also the human warfighter, especially if s/he is enhanced controversially, such as with a berserker-drug? Further, ethics would be concerned about the risk posed by the enhancement to the soldier as well as to the local population. As an example, anabolic steroids already throw some users into fits of rage; if approved for use by soldiers, this performance-enhancer may lead to indiscriminate killings and abuse of civilians. A related issue is whether the soldier has given full and informed consent to an enhancement and its risks, and whether consent is even required in a military setting where coercion and commands are the norm.
From a policy perspective, we continue to be worried that our first use of any new weapon would "let the genie out of the bottle," setting a precedent for others to follow. Where our use of drone strikes today has been called cowardly and dishonorable by adversaries, imagine what they might say about enhanced human warfighters, perhaps unnatural abominations in their eyes. Deploying ground forces at all, unlike drones, also runs the risk that our personnel may be captured, creating another crisis.
Scenario: E-bombs
Even straightforward, more conventional scenarios give rise to dilemmas, such as this one: A hostile nation has sent warships toward some islands in a territorial dispute. The U.S. is committed to defending those islands, but we'd rather not deploy personnel or engage in an offensive attack. We considered using defensive robots--such as "smart mines" that attack only enemy ships in a security zone--but we're still concerned about the reliability and responsibility issues mentioned above. For instance, we can't be certain enough that a damaged smart mine won't attack an illegal target, say, a fishing boat, or travel outside the minefield. But an "e-bomb" may be a better option: a weapon that releases an electromagnetic pulse (EMP) to disable all electronics around a target. With it, we could stop the warships in their tracks, without resorting to physical, provocative force. Should we use the e-bomb?
From a legal standpoint, since there are no civilians nearby, we don't need to worry about the principle of distinction. Or do we? If a hospital ship were traveling nearby, we'd generally need to take care not to harm or disable that vessel, following the Geneva Conventions. More importantly, it's still in dispute whether an electronic (as well as cyber) attack counts as a "use of force" under international law, such as Article 2(4) of the UN Charter. If it does, even an e-attack could provoke an armed counterstrike, and then the war is on. So far, the enemy warships have only been sailing, and it may be unclear whether a hostile invasion was really imminent in the first place, that is, whether we can appeal to a right of self-defense, as allowed by Article 51 of the UN Charter.
From an ethics standpoint, if we want to avoid provocation, we need to consider unintended effects of an e-bomb. Because a warship is a complex technical system, turning off its power may indirectly cause harm or death to sailors aboard, and this can elevate the crisis into an actual armed conflict. But assuming that war has begun and we are engaged in self-defense, an e-bomb seems preferable to a kinetic attack that would certainly harm property and persons, if they both get the same job done: stopping the warships. It may even be preferable to a cyberweapon, the effects of which we can't be certain, including its scope, immediacy, and possible proliferation "into the wild" or to civilian systems such as our own. Unlike an e-bomb, a cyberattack would give adversaries a blueprint or ideas to design a similar weapon that can be used against our systems.
From a policy standpoint, the U.S. may be worried about the proliferation of e-bombs above any other technology, because we'd have the most to lose with the world's most wired military. We have more self-interest in not setting a precedent here than perhaps with any other weapon system. Further, adversaries might begin to anticipate our e-bomb campaigns and implement a "dead hand" deterrence system to protect their assets--a system that automatically launches (or ceases to hold back) an attack on us, in the event of a total power loss. And this co-evolution of hunter and prey makes war even more dangerous than it already was.
Untangling ethics, policy, and law
From the above scenarios, based on the NeXTech wargames, we can see that ethics, policy, and law may come to radically different conclusions. Even when they do converge on a solution, they often focus on different issues. Perhaps in an ideal world, there's a syzygy or alignment of the three areas: Policies and law should be ethical. The real world, though, is messy. It's difficult to pin down and integrate analysis from the three disciplines, each an art and science unto itself.
In wargaming, we saw substantial disagreements not just at the intersections of ethics, policy, and law but also within each community, adding to the complexity of the exercise. These areas of contention are important to explore so that decision-makers have a broader perspective and more options, which is crucial in a dynamic, complex world that is unlikely to be captured by a single perspective.
So it's encouraging that the U.S. and other militaries are showing more interest in these areas. War is one of the most ethically problematic areas of human life. As such, there is much humanitarian and practical value in accounting for ethics, policy, and law--especially around emerging military technologies that give rise to novel scenarios and issues. Beyond sparing civilians from harm and safeguarding human rights, a commitment to ethics and the rule of law is what sets apart a military, with honor and professionalism, from a band of mercenaries.
As we have learned from the Vietnam War and, arguably, current drone-strike campaigns, superior technology by itself is not enough for victory. Winning "hearts and minds" matters for a lasting peace, and this is difficult to achieve if a war is prosecuted unethically or illegally. Failing to think ahead about ethics, policy, and law could also deal serious blows to national reputations and key military programs, from pain-rays to drones to cyberweapons and more, all presently controversial and under debate.
The short analyses I presented above are far from complete. The NeXTech wargames were meant to kickstart a conversation, helping to understand the work in front of us rather than attempting to anticipate every scenario and offer clear solutions.
We still need to examine the issues more fully and methodically in a "whole-systems" approach. In wargaming with law professors, JAG lawyers, policy advisers, philosophers, theologians, and other domain experts, we saw the value of their different perspectives to the conversation. We also saw the need for scientists, technologists, futurists, journalists, military officers, as well as cadets and midshipmen (who will be on the frontlines of these next-generation weapons) at the table to ensure the conversation is guided by realism.
Noetic's NeXTech is unique in its wargaming methodology, and other efforts exist as well--such as by the National Research Council, Naval Academy, and Chautauqua Council--to cross-pollinate expertise and to engage the broader public on these weighty issues, a vital part of democracy. So we already have a nice head start and, with that momentum, now just need to keep running.
Not only do ethics, policy, and legal experts believe these issues are urgent, but cultural and religious communities also want to participate. And all of these stakeholders will engage the debate with or without the participation of the defense establishment, whether government or industry. Without that participation, decision-makers lose a valuable opportunity to help frame the debate, address public fears, and make better informed calls about a new technology and its risks. The Active Denial System again is just one poster-child of this lesson.
It will take work to integrate ethics, policy, and law into national security planning and military technology development, particularly as the emerging technologies aren't here yet. But the future may come sooner than we think, and we are always surprised. We hope that the U.S. and other governments have the foresight and commitment to stay with this challenge. It's not a bridge too far, but one that is worth the effort.
Acknowledgements: Some of this research has been supported by NeXTech wargames, The Greenwall Foundation, US Naval Academy, Office of Naval Research, and California Polytechnic State University. I thank Keith Abney, Brad Allenby, Ben Fitzgerald, George R. Lucas, Jr., Peter W. Singer, Wendell Wallach, and John Watts for reviewing this essay. The statements expressed here are the author's alone and do not necessarily reflect the views of the aforementioned persons or organizations.