From state-sponsored cyber attacks to autonomous robotic weapons, twenty-first century war is increasingly disembodied. Our wars are being fought in the ether and by machines. And yet our ethics of war are stuck in the pre-digital age.
We're used to thinking of war as a physical phenomenon, as an outbreak of destructive violence that takes place in the physical world. Bullets fly, bombs explode, tanks roll, people collapse. Despite tremendous changes in the technology of warfare, war has remained a contest of human bodies. But as the drone wars have shown, that's no longer true, at least for one side of the battle.
Technological asymmetry has always been a feature of warfare, but no nation has ever been able to prosecute a war without any physical risk to its citizens. What might the ability to launch casualty-free wars do to the political barriers that stand between peace and conflict? In today's democracies politicians are obligated to explain, at regular intervals, why a military action requires the blood of a nation's young people. Wars waged by machines might not encounter much skepticism in the public sphere.
We just don't know what moral constraints should apply to these new kinds of warfare. Take the ancient, but still influential, doctrine of Just War Theory, which requires that war's destructive forces be unleashed only when absolutely necessary; war is to be pursued only as a last resort and only against combatants, never against civilians.
But information warfare, warfare pursued with information technologies, distorts concepts like "necessity" and "civilian" in ways that challenge these ethical frameworks. An attack on another nation's information infrastructure, for instance, would surely count as an act of war. But what if it reduced the risk of future bloodshed? Should we really only consider it as a last resort? The use of robots further complicates things. It's not yet clear who should be held responsible if and when an autonomous military robot kills a civilian.
These are the questions that haunt the philosophers and ethicists who think deeply about information warfare, and they will only become more pertinent as our information technologies become more sophisticated. Mariarosaria Taddeo, a Marie Curie Fellow at the University of Hertfordshire, recently published an article in Philosophy & Technology called "Information Warfare: A Philosophical Perspective" that addresses these questions and more. What follows is my conversation with Taddeo about how information technology is changing the way we wage war, and what philosophy is doing to catch up.
How do you define information warfare?
Taddeo: The definition of "information warfare" is hotly debated. From my perspective, for the purposes of philosophical analysis, it's best to define information warfare in terms of concrete forms, and then see if there is a commonality between those forms. One example would be cyber-attacks or hacker attacks, which we consider to be information warfare; another example would be the use of drones or semi-autonomous machines. From those instances, to me, a good definition of information warfare is "the use of information communication technologies within a military strategy that is endorsed by a state." And if you go to the Pentagon they will speak about this in different ways, they put it under different headings, in terms of information operations or cyber warfare, cyber attacks, that sort of thing.
Was Russia's attack on Estonia in 2007 the first broad-based state example of this?
Taddeo: The attack on Estonia is certainly one example of it, but it's only one instance, and it's not the first. You could, for example, point to the SWORDS robots that were used in Iraq several years prior to the attack on Estonia, or the use of predator drones, etc. Remember information warfare encompasses more than only information communication technologies used through the web; these technologies can be used in several different domains and in several different ways.
But it's hard to point to a definitive first example of this. It goes back quite a ways, and these technologies have been evolving for some time now; remember that the first Internet protocols were developed by DARPA---in some sense, these technologies were born in the military sphere. Turing himself, the father of computer science, worked mainly within military programs during the Second World War.
Interesting, but do I understand you correctly that you distinguish this new kind of information warfare from pre-internet information technologies like the radio and the telegraph?
Taddeo: Well those are certainly information technologies, and to some extent information has always been an important part of warfare, because we have always wanted to communicate and to destroy our enemies' information structures and communication capabilities. What we want to distinguish here is the use of these new kinds of information communication technologies, because they have proved to be much more revolutionary in their effects on warfare than previous technologies like telegraphs or telephones or radios or walkie-talkies.
What's revolutionary about them is that they have restructured the very reality in which we perceive ourselves as living, and the way in which we think about the concepts of warfare or the state. Take for example the concept of the state: we currently define a state as a political unit that exercises power over a certain physical territory. But when you consider that states are now trying to also dominate certain parts of cyberspace, our definition becomes problematic because cyberspace doesn't have a defined territory. The information revolution is shuffling these concepts around in really interesting ways from a philosophical perspective, and more specifically, from an ethical perspective.
An Israeli soldier carries a drone. Reuters.
In your paper you mention the use of robotic weapons like drones as one example of the rapid development of information warfare. You note that the U.S. government deployed only 150 robotic weapons in Iraq in 2004, but that number had grown to 12,000 by 2008. Is this a trend you expect to continue?
Taddeo: I expect so. The nature of these technologies encourages the political decision to endorse and deploy them in several ways. For one, they are quite a bit cheaper than traditional weapons, but more importantly they bypass the need for political actors to confront the media and public opinion about sending young men and women abroad to risk their lives. These machines make it possible to contemplate military operations that would previously have been considered too dangerous for humans to undertake. From a political and military perspective, the advantages of these weapons heavily outweigh the disadvantages.
But there are interesting problems that surface when you use them; for instance, when you have robots fighting a war in a foreign country, the population of that country is going to be slow to gain trust, which can make occupation or even just persuasion quite difficult. You can see this in Iraq or Afghanistan, where the populations have been slower to develop empathy for American forces because they see them as people who send machines to fight a war. But these shortcomings aren't weighty enough to convince politicians or generals to forgo the use of these technologies, and because of that I expect this trend towards the use of robotic weapons will continue.
You note the development of a new kind of robotic weapon, the SGR-A1, which is now being used by South Korea to patrol its border with North Korea. What distinguishes the SGR-A1 from previous weapons of information warfare?
Taddeo: The main difference is that this machine doesn't necessarily have a human operator, or a "man in the loop" as some have phrased it. It can autonomously decide to fire on a target without having to wait for a signal from a remote operator. In the past drones have been tele-operated, or if not, they didn't possess firing ability, and so there was no immediate risk that one of these machines could autonomously harm a human being. The fact that weapons like the SGR-A1 now exist tells us that there are questions that we need to confront. It's wonderful that we're able to save human lives on one side, our side, of a conflict, but the issues of responsibility, the issue of who is responsible for the actions of these semi-autonomous machines remain to be addressed.
Of course it's hard to develop a general rule for these situations where you have human agency filtered through the actions of these machines; it's more likely we're going to need a case-by-case approach. But whatever we do, we want to push as much of the responsibility as we can into the human sphere.
In your paper you say that information warfare is a compelling case of a larger shift toward the non-physical domain brought about by the Information Revolution. What do you mean by that?
Taddeo: It might make things more clear to start with the Information Revolution. The phrase "Information Revolution" is meant to convey the extraordinary ways that information communication technologies have changed our lives. There are of course plenty of examples of this, including Facebook and Twitter and that sort of thing, but what these technologies have really done is introduce a new non-physical space that we exist in, and, increasingly, it's becoming just as important as the offline or physical space---in fact events in this non-physical domain often affect events in the physical world.
Information warfare is one way that you can see the increasing importance of this non-physical domain. For example, we are now using this non-physical space to prove the power of our states---we are no longer concerned with demonstrating the authority of our states only in the physical world.
In what ways might information warfare increase the risk of conflicts and human casualties?
Taddeo: It's a tricky question, because the risks aren't yet clear, but there is a worry that the number of conflicts around the world could increase, because these technologies make it easier for those who direct military attacks to launch them without endangering the lives of their own citizens. As I mentioned before, information warfare is in this sense easier to wage from a political perspective.
It's more difficult to determine the effect on casualties. Information Warfare has the potential to be blood-free, but that's only one potentiality; this technology could just as easily be used to produce the kind of damage caused by a bomb or any other traditional weapon---just imagine what would happen if a cyber-attack was launched against a flight control system or a subway system. These dangerous aspects of information warfare shouldn't be underestimated; the deployment of information technology in warfare scenarios can be highly dangerous and destructive, and so there's no way to properly quantify the casualties that could result. This is one reason why we so badly need a philosophical and ethical analysis of this phenomenon, so that we can properly evaluate the risks.
This is an actual graphic that ran in Airman Magazine, the official magazine of the Air Force.
Part of your conception of information warfare is as an outgrowth of the Information Revolution. You draw on the work of Luciano Floridi, who has said that the Information Revolution is the fourth revolution, coming after the Copernican, Darwinian and the Freudian revolutions, which all changed the way humans perceive themselves in the Universe. Did those revolutions change warfare in interesting ways?
Taddeo: That's an interesting question. I don't think those revolutions had the kind of impact on warfare that we're seeing with the Information Revolution. Intellectual and technological revolutions seem to go hand in hand, historically, but I don't, to use one example, think that the Freudian Revolution had a dramatic effect on warfare. The First World War was waged much like the wars of the 19th century, and to the extent that it wasn't, those changes did not come about because of Freud.
What you find when you study those revolutions is that while they may have resulted in new technologies like the machine gun or the airplane, none of them changed the concept of war. Even the Copernican Revolution, which was similar to the Information Revolution in the sense that it dislocated our sense of ourselves as existing in a particular space and time, didn't have this effect. The concept of war remained intact in the wake of those revolutions, whereas we are finding that the concept of war itself is changing as a result of the Information Revolution.
How has the Information Revolution changed the concept of war?
Taddeo: It goes back to the shift to the non-physical domain; war has always been perceived as something distinctly physical involving bloodshed and destruction and violence, all of which are very physical types of phenomena. If you talk to people who have participated in warfare, historically, they will describe the visceral effects of it---seeing blood, hearing loud noises, shooting a gun, etc. Warfare was, in the past, always something very concrete.
This new kind of warfare is non-physical; of course it can still cause violence, but it can also be computer to computer, or it can be an attack on certain types of information infrastructure and still be an act of war. Consider the Estonian cyber-attack, where you had a group of actors launching an attack on institutional websites in Estonia; there were no physical casualties, there was no physical violence involved. Traditional war was all about violence; the entire point of it was to physically overpower your enemy. That's a major change. It shifts the ethical analysis, which was previously focused only on minimizing bloodshed. But when you have warfare that doesn't lead to any bloodshed, what sort of ethical framework are you going to apply?
For some time now, Just War Theory has been one of the main ethical frameworks for examining warfare. You seem to argue that its modes of analysis break down when applied to information warfare. For instance, you note that the principle that war ought only to be pursued "as a last resort" may not apply to information warfare. Why is that?
Taddeo: Well first I would say that as an ethical framework Just War Theory has served us well up to this point. It was first developed by the Romans, and from Aquinas on many of the West's brightest minds have contributed to it. It's not that it needs to be discarded; quite the contrary, there are some aspects of it that need to be kept as guiding principles going forward. Still, it's a theory that addresses warfare as it was known historically, as something very physical.
The problem with the principle of "last resort" is that while, yes, we want physical warfare to be the last choice after everything else, we might not want information warfare to be a "last resort," because it might actually prevent bloodshed in the long run. Suppose that a cyber-attack could prevent traditional warfare from breaking out between two nations; by the criteria of Just War Theory it would still be an act of war, and thus justifiable only as a last resort. And so you might not want to apply the Just War framework to warfare that is not physically violent.
You also note that the distinction between combatants and civilians is blurred in information warfare, and that this also has consequences for Just War Theory, which makes liberal use of that distinction. How so?
Taddeo: Well until a century ago there was a clear-cut distinction between the military and civilians---you either wear a uniform or you don't, and if you do, you are a justifiable military target. This distinction has been eroded over time, even prior to the Information Revolution; civilians took part in a number of twentieth century conflicts. But with information warfare the distinction is completely gone; not only can a regular person wage information warfare with a laptop, but also a computer engineer working for the U.S. government or the Russian government can participate in information warfare all day long and then go home and have dinner with his or her family, or have a beer at the pub.
The problem is, if we don't have any criteria, any way of judging who is involved in a war and who is not, then how do we respond? Who do we target? The risk is that our list of targets could expand to include people who we would now consider civilians, and that means targeting them with physical warfare, but also with surveillance, and that could be very problematic. Surveillance is a particularly thorny issue here, because if we don't know who we have to observe, we may end up scaling up our surveillance efforts to encompass entire populations and that could have very serious effects in the realm of individual rights.
You have identified the prevention of information entropy as a kind of first principle in an ethical framework that can be applied to information warfare---is that right, and if so, does that supplant the saving of human life as our usual first principle for thinking about these things?
Taddeo: I think they are complementary. First of all, a clarification is in order. Information entropy has nothing to do with physics or information theory; it's not a physical or mathematical concept. Entropy here refers to the destruction of informational entities, which is something we don't want. Informational entities are not only computers; seen from an informational perspective, all existing things are informational entities. In this sense an action that generates entropy in the universe is an action that destroys, damages or corrupts an informational entity---anything from defacing a beautiful painting, to launching a virus that damages information infrastructure, to killing a human being. Any action that makes the information environment worse off generates entropy and is therefore immoral. In this sense the prevention of information entropy is consistent with the saving of human life, because human beings contribute a great deal to the infosphere---killing a human being would generate a lot of information entropy.
This is all part of a wider ethical framework called Information Ethics, mainly developed by Luciano Floridi. Information Ethics ascribes a moral stance to all existing things. It does not have an ontological bias, that is to say it doesn't privilege certain sorts of beings. This does not mean that according to Information Ethics all things have the 'same' moral value, but rather that they 'share' some common minimal rights and deserve some minimal respect. Here, the moral value of a particular entity would be proportional to its contributions to the information environment. So a sheet of white paper with one dot on it would have less moral value than, say, a book of poems, or a human being. That's one way of thinking about this.