From state-sponsored cyber attacks to autonomous robotic weapons, twenty-first century war is increasingly disembodied. Our wars are being fought in the ether and by machines. And yet our ethics of war are stuck in the pre-digital age.
We're used to thinking of war as a physical phenomenon, as an outbreak of destructive violence that takes place in the physical world. Bullets fly, bombs explode, tanks roll, people collapse. Despite tremendous changes in the technology of warfare, war remained a contest of human bodies. But as the drone wars have shown, that's no longer true, at least for one side of the battle.
Technological asymmetry has always been a feature of warfare, but no nation has ever been able to prosecute a war without any physical risk to its citizens. What might the ability to launch casualty-free wars do to the political barriers that stand between peace and conflict? In today's democracies politicians are obligated to explain, at regular intervals, why a military action requires the blood of a nation's young people. Wars waged by machines might not encounter much skepticism in the public sphere.
We just don't know what moral constraints should apply to these new kinds of warfare. Take the ancient, but still influential, doctrine of Just War Theory, which requires that war's destructive forces be unleashed only when absolutely necessary; war is to be pursued only as a last resort and only against combatants, never against civilians.
But information warfare, warfare pursued with information technologies, distorts concepts like "necessity" and "civilian" in ways that challenge these ethical frameworks. An attack on another nation's information infrastructure, for instance, would surely count as an act of war. But what if it reduced the risk of future bloodshed? Should we really consider it only as a last resort? The use of robots further complicates things. It's not yet clear who should be held responsible if and when an autonomous military robot kills a civilian.
These are the questions that haunt the philosophers and ethicists who think deeply about information warfare, and they will only become more pertinent as our information technologies become more sophisticated. Mariarosaria Taddeo, a Marie Curie Fellow at the University of Hertfordshire, recently published an article in Philosophy & Technology called "Information Warfare: A Philosophical Perspective" that addresses these questions and more. What follows is my conversation with Taddeo about how information technology is changing the way we wage war, and what philosophy is doing to catch up.
How do you define information warfare?
Taddeo: The definition of "information warfare" is hotly debated. From my perspective, for the purposes of philosophical analysis, it's best to define information warfare in terms of concrete forms, and then see if there is a commonality between those forms. One example would be cyber-attacks or hacker attacks, which we consider to be information warfare; another example would be the use of drones or semi-autonomous machines. From those instances, to me, a good definition of information warfare is "the use of information communication technologies within a military strategy that is endorsed by a state." And if you go to the Pentagon they will speak about this in different ways, putting it under different headings: information operations, cyber warfare, cyber attacks, that sort of thing.
Was Russia's attack on Estonia in 2007 the first broad-based state example of this?
Taddeo: The attack on Estonia is certainly one example of it, but it's only one instance, and it's not the first. You could, for example, point to the SWORDS robots that were used in Iraq several years prior to the attack on Estonia, or the use of Predator drones, etc. Remember, information warfare encompasses more than information communication technologies used through the web; these technologies can be used in several different domains and in several different ways.
But it's hard to point to a definitive first example of this. It goes back quite a ways, and these technologies have been evolving for some time now; remember that the first Internet protocols were developed by DARPA---in some sense, these technologies were born in the military sphere. Turing himself, the father of computer science, worked mainly within military programs during the Second World War.
Interesting, but do I understand you correctly that you distinguish this new kind of information warfare from pre-internet information technologies like the radio and the telegraph?
Taddeo: Well those are certainly information technologies, and to some extent information has always been an important part of warfare, because we have always wanted to communicate and to destroy our enemies' information structures and communication capabilities. What we want to distinguish here is the use of these new kinds of information communication technologies, because they have proved to be much more revolutionary in their effects on warfare than previous technologies like telegraphs or telephones or radios or walkie-talkies.
What's revolutionary about them is that they have restructured the very reality in which we perceive ourselves as living, and the way in which we think about the concepts of warfare or the state. Take for example the concept of the state: we currently define a state as a political unit that exercises power over a certain physical territory. But when you consider that states are now trying to also dominate certain parts of cyberspace, our definition becomes problematic because cyberspace doesn't have a defined territory. The information revolution is shuffling these concepts around in really interesting ways from a philosophical perspective, and more specifically, from an ethical perspective.
An Israeli soldier carries a drone. Reuters.
In your paper you mention the use of robotic weapons like drones as one example of the rapid development of information warfare. You note that the U.S. government deployed only 150 robotic weapons in Iraq in 2004, but that number had grown to 12,000 by 2008. Is this a trend you expect to continue?
Taddeo: I expect so. The nature of these technologies encourages the political decision to deploy them in several ways. For one, they are quite a bit cheaper than traditional weapons, but more importantly they bypass the need for political actors to confront media and public opinion about sending young men and women abroad to risk their lives. These machines enable the contemplation of military operations that would have previously been considered too dangerous for humans to undertake. From a political and military perspective, the advantages of these weapons heavily outweigh the disadvantages.
But there are interesting problems that surface when you use them; for instance, when you have robots fighting a war in a foreign country, the population of that country is going to be slow to trust you, which can make occupation or even just persuasion quite difficult. You can see this in Iraq or Afghanistan, where the populations have been slower to develop empathy for American forces because they see them as people who send machines to fight a war. But these shortcomings aren't weighty enough to convince politicians or generals to forgo the use of these technologies, and because of that I expect this trend toward the use of robotic weapons will continue.
You note the development of a new kind of robotic weapon, the SGR-A1, which is now being used by South Korea to patrol its border with North Korea. What distinguishes the SGR-A1 from previous weapons of information warfare?
Taddeo: The main difference is that this machine doesn't necessarily have a human operator, or a "man in the loop" as some have phrased it. It can autonomously decide to fire on a target without having to wait for a signal from a remote operator. In the past, drones were tele-operated, or, if not, they didn't possess firing ability, and so there was no immediate risk that one of these machines could autonomously harm a human being. The fact that weapons like the SGR-A1 now exist tells us that there are questions we need to confront. It's wonderful that we're able to save human lives on one side, our side, of a conflict, but the issue of who is responsible for the actions of these semi-autonomous machines remains to be addressed.
Of course it's hard to develop a general rule for these situations where you have human nature filtered through the actions of these machines; it's more likely we're going to need a case-by-case approach. But whatever we do, we want to push as much of the responsibility as we can into the human sphere.
In your paper you say that information warfare is a compelling case of a larger shift toward the non-physical domain brought about by the Information Revolution. What do you mean by that?
Taddeo: It might make things clearer to start with the Information Revolution. The phrase "Information Revolution" is meant to convey the extraordinary ways that information communication technologies have changed our lives. There are of course plenty of examples of this, including Facebook and Twitter and that sort of thing, but what these technologies have really done is introduce a new non-physical space that we exist in, and, increasingly, it's becoming just as important as the offline or physical space---in fact events in this non-physical domain often affect events in the physical world.
Information warfare is one way that you can see the increasing importance of this non-physical domain. For example, we are now using this non-physical space to prove the power of our states---we are no longer concerned with demonstrating the authority of our states only in the physical world.
In what ways might information warfare increase the risk of conflicts and human casualties?
Taddeo: It's a tricky question, because the risks aren't yet clear, but there is a worry that the number of conflicts around the world could increase, because these technologies make it easier for those who direct military attacks to do so without endangering the lives of their citizens. As I mentioned before, information warfare is in this sense easier to wage from a political perspective.
It's more difficult to determine the effect on casualties. Information warfare has the potential to be blood-free, but that's only one potentiality; this technology could just as easily be used to produce the kind of damage caused by a bomb or any other traditional weapon---just imagine what would happen if a cyber-attack were launched against a flight control system or a subway system. These dangerous aspects of information warfare shouldn't be underestimated; the deployment of information technology in warfare scenarios can be highly destructive, and there's no way to properly quantify the casualties that could result. This is one reason why we so badly need a philosophical and ethical analysis of this phenomenon, so that we can properly evaluate the risks.
This is an actual graphic that ran in Airman Magazine, the official magazine of the Air Force.
Part of your conception of information warfare is as an outgrowth of the Information Revolution. You draw on the work of Luciano Floridi, who has said that the Information Revolution is the fourth revolution, coming after the Copernican, Darwinian, and Freudian revolutions, which all changed the way humans perceive themselves in the Universe. Did those revolutions change warfare in interesting ways?
Taddeo: That's an interesting question. I don't think those revolutions had the kind of impact on warfare that we're seeing with the Information Revolution. Intellectual and technological revolutions seem to go hand in hand, historically, but I don't, to use one example, think that the Freudian Revolution had a dramatic effect on warfare. The First World War was waged much like the wars of the 19th century, and to the extent that it wasn't, those changes did not come about because of Freud.
What you find when you study those revolutions is that while they may have resulted in new technologies like the machine gun or the airplane, none of them changed the concept of war. Even the Copernican Revolution, which was similar to the Information Revolution in the sense that it dislocated our sense of ourselves as existing in a particular space and time, didn't have this effect. The concept of war remained intact in the wake of those revolutions, whereas we are finding that the concept of war itself is changing as a result of the Information Revolution.
How has the Information Revolution changed the concept of war?
Taddeo: It goes back to the shift to the non-physical domain; war has always been perceived as something distinctly physical involving bloodshed and destruction and violence, all of which are very physical types of phenomena. If you talk to people who have participated in warfare, historically, they will describe the visceral effects of it---seeing blood, hearing loud noises, shooting a gun, etc. Warfare was, in the past, always something very concrete.
This new kind of warfare is non-physical; of course it can still cause violence, but it can also be computer to computer, or it can be an attack on certain types of information infrastructure and still be an act of war. Consider the Estonian cyber-attack, where you had a group of actors launching an attack on institutional websites in Estonia; there were no physical casualties, there was no physical violence involved. Traditional war was all about violence; the entire point of it was to physically overpower your enemy. That's a major change. It shifts the ethical analysis, which was previously focused only on minimizing bloodshed. But when you have warfare that doesn't lead to any bloodshed, what sort of ethical framework are you going to apply?
For some time now, Just War Theory has been one of the main ethical frameworks for examining warfare. You seem to argue that its modes of analysis break down when applied to information warfare. For instance, you note that the principle that war ought only to be pursued "as a last resort" may not apply to information warfare. Why is that?
Taddeo: Well first I would say that as an ethical framework Just War Theory has served us well up to this point. It was first developed by the Romans, and from Aquinas on many of the West's brightest minds have contributed to it. It's not that it needs to be discarded; quite the contrary, there are some aspects of it that need to be kept as guiding principles going forward. Still, it's a theory that addresses warfare as it was known historically, as something very physical.
The problem with the principle of "last resort" is that while, yes, we want physical warfare to be the last choice after everything else, it might not make sense to treat information warfare as a "last resort," because it might actually prevent bloodshed in the long run. Suppose that a cyber-attack could prevent traditional warfare from breaking out between two nations; by the criteria of Just War Theory it would still be an act of war, and thus justifiable only as a last resort---ruling out exactly the kind of early, bloodless intervention that might save lives. And so you might not want to apply the Just War framework to warfare that is not physically violent.
You also note that the distinction between combatants and civilians is blurred in information warfare, and that this also has consequences for Just War Theory, which makes liberal use of that distinction. How so?
Taddeo: Well until a century ago there was a clear-cut distinction between the military and civilians---you either wore a uniform or you didn't, and if you did, you were a justifiable military target. This distinction has been eroded over time, even prior to the Information Revolution; civilians took part in a number of twentieth-century conflicts. But with information warfare the distinction is completely gone; not only can a regular person wage information warfare with a laptop, but a computer engineer working for the U.S. government or the Russian government can also participate in information warfare all day long and then go home and have dinner with his or her family, or have a beer at the pub.
The problem is, if we don't have any criteria, any way of judging who is involved in a war and who is not, then how do we respond? Who do we target? The risk is that our list of targets could expand to include people we would now consider civilians---targeting them not only with physical warfare but also with surveillance, which could be very problematic. Surveillance is a particularly thorny issue here, because if we don't know whom we have to observe, we may end up scaling up our surveillance efforts to encompass entire populations, and that could have very serious effects in the realm of individual rights.
You have identified the prevention of information entropy as a kind of first principle in an ethical framework that can be applied to information warfare---is that right, and if so, does that supplant the saving of human life as our usual first principle for thinking about these things?
Taddeo: I think they are complementary. First of all, a clarification is in order. Information entropy here has nothing to do with physics or information theory; it's not a physical or mathematical concept. Entropy here refers to the destruction of informational entities, which is something we don't want. And informational entities are not only computers; seen from an informational perspective, all existing things are informational entities. In this sense an action that generates entropy in the universe is any action that destroys, damages, or corrupts informational entities---anything from defacing a beautiful painting, to launching a virus that damages information infrastructure, to killing a human being. Any action that makes the information environment worse off generates entropy and is therefore immoral. In this sense the prevention of information entropy is consistent with the saving of human life, because human beings contribute a great deal to the infosphere---killing a human being would generate a lot of information entropy.
This is all part of a wider ethical framework called Information Ethics, mainly developed by Luciano Floridi. Information Ethics ascribes moral status to all existing things. It does not have an ontological bias, that is to say it doesn't privilege certain sorts of beings. This does not mean that according to Information Ethics all things have the 'same' moral value, but rather that they 'share' some common minimal rights and deserve some minimal respect. Here, the moral value of a particular entity is proportional to its contribution to the information environment. So a white paper with one dot on it would have less moral value than, say, a book of poems, or a human being. That's one way of thinking about this.
Ross Andersen is a senior editor at The Atlantic, where he oversees the Science, Technology, and Health sections. He was previously deputy editor of Aeon Magazine.