From state-sponsored cyber attacks to autonomous robotic weapons, twenty-first century war is increasingly disembodied. Our wars are being fought in the ether and by machines. And yet our ethics of war are stuck in the pre-digital age.
We're used to thinking of war as a physical phenomenon, as an outbreak of destructive violence that takes place in the physical world. Bullets fly, bombs explode, tanks roll, people collapse. Despite tremendous changes in the technology of warfare, it has remained a contest of human bodies. But as the drone wars have shown, that's no longer true, at least for one side of the battle.
Technological asymmetry has always been a feature of warfare, but no nation has ever been able to prosecute a war without any physical risk to its citizens. What might the ability to launch casualty-free wars do to the political barriers that stand between peace and conflict? In today's democracies politicians are obligated to explain, at regular intervals, why a military action requires the blood of a nation's young people. Wars waged by machines might not encounter much skepticism in the public sphere.
We just don't know what moral constraints should apply to these new kinds of warfare. Take the ancient, but still influential, doctrine of Just War Theory, which requires that war's destructive forces be unleashed only when absolutely necessary; war is to be pursued only as a last resort and only against combatants, never against civilians.
But information warfare, warfare pursued with information technologies, distorts concepts like "necessity" and "civilian" in ways that challenge these ethical frameworks. An attack on another nation's information infrastructure, for instance, would surely count as an act of war. But what if it reduced the risk of future bloodshed? Should we really only consider it as a last resort? The use of robots further complicates things. It's not yet clear who should be held responsible if and when an autonomous military robot kills a civilian.
These are the questions that haunt the philosophers and ethicists who think deeply about information warfare, and they will only become more pertinent as our information technologies become more sophisticated. Mariarosaria Taddeo, a Marie Curie Fellow at the University of Hertfordshire, recently published an article in Philosophy & Technology called "Information Warfare: A Philosophical Perspective" that addresses these questions and more. What follows is my conversation with Taddeo about how information technology is changing the way we wage war, and what philosophy is doing to catch up.
How do you define information warfare?
Taddeo: The definition of "information warfare" is hotly debated. From my perspective, for the purposes of philosophical analysis, it's best to define information warfare in terms of concrete forms, and then see if there is a commonality between those forms. One example would be cyber-attacks or hacker attacks, which we consider to be information warfare; another example would be the use of drones or semi-autonomous machines. From those instances, to me, a good definition of information warfare is "the use of information communication technologies within a military strategy that is endorsed by a state." And if you go to the Pentagon they will speak about this in different ways; they put it under different headings, in terms of information operations or cyber warfare, cyber attacks, that sort of thing.
Was Russia's attack on Estonia in 2007 the first broad-based state example of this?
Taddeo: The attack on Estonia is certainly one example of it, but it's only one instance, and it's not the first. You could, for example, point to the SWORDS robots that were used in Iraq several years prior to the attack on Estonia, or the use of Predator drones, etc. Remember, information warfare encompasses more than only information communication technologies used through the web; these technologies can be used in several different domains and in several different ways.
But it's hard to point to a definitive first example of this. It goes back quite a way, and these technologies have been evolving for some time now; remember that the first Internet protocols were developed by DARPA---in some sense, these technologies were born in the military sphere. Turing himself, the father of computer science, worked mainly within military programs during the Second World War.
Interesting, but do I understand you correctly that you distinguish this new kind of information warfare from pre-internet information technologies like the radio and the telegraph?
Taddeo: Well those are certainly information technologies, and to some extent information has always been an important part of warfare, because we have always wanted to communicate and to destroy our enemies' information structures and communication capabilities. What we want to distinguish here is the use of these new kinds of information communication technologies, because they have proved to be much more revolutionary in their effects on warfare than previous technologies like telegraphs or telephones or radios or walkie-talkies.
What's revolutionary about them is that they have restructured the very reality in which we perceive ourselves as living, and the way in which we think about the concepts of warfare or the state. Take for example the concept of the state: we currently define a state as a political unit that exercises power over a certain physical territory. But when you consider that states are now trying to also dominate certain parts of cyberspace, our definition becomes problematic because cyberspace doesn't have a defined territory. The information revolution is shuffling these concepts around in really interesting ways from a philosophical perspective, and more specifically, from an ethical perspective.
An Israeli soldier carries a drone. Reuters.
In your paper you mention the use of robotic weapons like drones as one example of the rapid development of information warfare. You note that the U.S. government deployed only 150 robotic weapons in Iraq in 2004, but that number had grown to 12,000 by 2008. Is this a trend you expect to continue?
Taddeo: I expect so. The nature of these technologies encourages the political decision to endorse and deploy them in several ways. For one, they are quite a bit cheaper than traditional weapons, but more importantly they spare political actors from having to confront the media and public opinion about sending young men and women abroad to risk their lives. These machines enable the contemplation of military operations that would have previously been considered too dangerous for humans to undertake. From a political and military perspective, the advantages of these weapons outweigh the disadvantages quite heavily.
But there are interesting problems that surface when you use them; for instance, when you have robots fighting a war in a foreign country, the population of that country is going to be slow to extend its trust, which can make occupation, or even just persuasion, quite difficult. You can see this in Iraq or Afghanistan, where the populations have been slower to develop empathy for American forces because they see them as people who send machines to fight a war. But these shortcomings aren't weighty enough to convince politicians or generals to forgo the use of these technologies, and because of that I expect this trend toward the use of robotic weapons will continue.
You note the development of a new kind of robotic weapon, the SGR-A1, which is now being used by South Korea to patrol its border with North Korea. What distinguishes the SGR-A1 from previous weapons of information warfare?
Taddeo: The main difference is that this machine doesn't necessarily have a human operator, or a "man in the loop" as some have phrased it. It can autonomously decide to fire on a target without having to wait for a signal from a remote operator. In the past drones have been tele-operated, or, if not, they didn't possess firing ability, and so there was no immediate risk that one of these machines could autonomously harm a human being. The fact that weapons like the SGR-A1 now exist tells us that there are questions we need to confront. It's wonderful that we're able to save human lives on one side, our side, of a conflict, but the issue of who is responsible for the actions of these semi-autonomous machines remains to be addressed.
Of course it's hard to develop a general rule for these situations where you have human nature filtered through the actions of these machines; it's more likely we're going to need a case-by-case approach. But whatever we do, we want to push as much of the responsibility as we can into the human sphere.
In your paper you say that information warfare is a compelling case of a larger shift toward the non-physical domain brought about by the Information Revolution. What do you mean by that?
Taddeo: It might make things clearer to start with the Information Revolution. The phrase "Information Revolution" is meant to convey the extraordinary ways that information communication technologies have changed our lives. There are of course plenty of examples of this, including Facebook and Twitter and that sort of thing, but what these technologies have really done is introduce a new non-physical space that we exist in, and, increasingly, it's becoming just as important as the offline or physical space---in fact events in this non-physical domain often affect events in the physical world.
Information warfare is one way that you can see the increasing importance of this non-physical domain. For example, we are now using this non-physical space to prove the power of our states---we are no longer concerned with demonstrating the authority of our states only in the physical world.
In what ways might information warfare increase the risk of conflicts and human casualties?
Taddeo: It's a tricky question, because the risks aren't yet clear, but there is a worry that the number of conflicts around the world could increase, because these technologies make it easier for those who direct military attacks to do so without endangering the lives of their own citizens. As I mentioned before, information warfare is in this sense easier to wage from a political perspective.
It's more difficult to determine the effect on casualties. Information warfare has the potential to be blood-free, but that's only one potentiality; this technology could just as easily be used to produce the kind of damage caused by a bomb or any other traditional weapon---just imagine what would happen if a cyber-attack were launched against a flight control system or a subway system. These dangerous aspects of information warfare shouldn't be underestimated; the deployment of information technology in warfare scenarios can be highly dangerous and destructive, and there's no way to properly quantify the casualties that could result. This is one reason why we so badly need a philosophical and ethical analysis of this phenomenon, so that we can properly evaluate the risks.
This is an actual graphic that ran in Airman Magazine, the official magazine of the Air Force.
Part of your conception of information warfare is as an outgrowth of the Information Revolution. You draw on the work of Luciano Floridi, who has said that the Information Revolution is the fourth revolution, coming after the Copernican, Darwinian, and Freudian revolutions, which all changed the way humans perceive themselves in the Universe. Did those revolutions change warfare in interesting ways?
Taddeo: That's an interesting question. I don't think those revolutions had the kind of impact on warfare that we're seeing with the Information Revolution. Intellectual and technological revolutions seem to go hand in hand, historically, but I don't, to use one example, think that the Freudian Revolution had a dramatic effect on warfare. The First World War was waged much like the wars of the 19th century, and to the extent that it wasn't, those changes did not come about because of Freud.
What you find when you study those revolutions is that while they may have resulted in new technologies like the machine gun or the airplane, none of them changed the concept of war. Even the Copernican Revolution, which was similar to the Information Revolution in the sense that it dislocated our sense of ourselves as existing in a particular space and time, didn't have this effect. The concept of war remained intact in the wake of those revolutions, whereas we are finding that the concept of war itself is changing as a result of the Information Revolution.
How has the Information Revolution changed the concept of war?
Taddeo: It goes back to the shift to the non-physical domain; war has always been perceived as something distinctly physical involving bloodshed and destruction and violence, all of which are very physical types of phenomena. If you talk to people who have participated in warfare, historically, they will describe the visceral effects of it---seeing blood, hearing loud noises, shooting a gun, etc. Warfare was, in the past, always something very concrete.
This new kind of warfare is non-physical; of course it can still cause violence, but it can also be computer to computer, or it can be an attack on certain types of information infrastructure and still be an act of war. Consider the Estonian cyber-attack, where you had a group of actors launching an attack on institutional websites in Estonia; there were no physical casualties and no physical violence involved. Traditional war was all about violence; the entire point of it was to physically overpower your enemy. That's a major change. It shifts the ethical analysis, which was previously focused only on minimizing bloodshed. But when you have warfare that doesn't lead to any bloodshed, what sort of ethical framework are you going to apply?
For some time now, Just War Theory has been one of the main ethical frameworks for examining warfare. You seem to argue that its modes of analysis break down when applied to information warfare. For instance, you note that the principle that war ought only to be pursued "as a last resort" may not apply to information warfare. Why is that?
Taddeo: Well first I would say that as an ethical framework Just War Theory has served us well up to this point. It was first developed by the Romans, and from Aquinas on, many of the West's brightest minds have contributed to it. It's not that it needs to be discarded; quite the contrary, there are some aspects of it that need to be kept as guiding principles going forward. Still, it's a theory that addresses warfare as it was known historically, as something very physical.
The problem with the principle of "last resort" is that while, yes, we want physical warfare to be the last choice after everything else, it's not clear that information warfare should be a "last resort," because it might actually prevent bloodshed in the long run. Suppose that a cyber-attack could prevent traditional warfare from breaking out between two nations; by the criteria of Just War Theory it would be an act of war and thus justifiable only as a last resort, even though launching it early might avert physical violence altogether. And so you might not want to apply the Just War framework to warfare that is not physically violent.
You also note that the distinction between combatants and civilians is blurred in information warfare, and that this also has consequences for Just War Theory, which makes liberal use of that distinction. How so?
Taddeo: Well until a century ago there was a clear-cut distinction between the military and civilians---you either wore a uniform or you didn't, and if you did, you were a justifiable military target. This distinction has been eroded over time, even prior to the Information Revolution; civilians took part in a number of twentieth-century conflicts. But with information warfare the distinction is completely gone; not only can a regular person wage information warfare with a laptop, but a computer engineer working for the U.S. government or the Russian government can also participate in information warfare all day long and then go home and have dinner with his or her family, or have a beer at the pub.
The problem is, if we don't have any criteria, any way of judging who is involved in a war and who is not, then how do we respond? Who do we target? The risk is that our list of targets could expand to include people whom we would now consider civilians, and that means targeting them not only with physical warfare but also with surveillance, which could be very problematic. Surveillance is a particularly thorny issue here, because if we don't know who we have to observe, we may end up scaling up our surveillance efforts to encompass entire populations, and that could have very serious effects in the realm of individual rights.
You have identified the prevention of information entropy as a kind of first principle in an ethical framework that can be applied to information warfare---is that right, and if so, does that supplant the saving of human life as our usual first principle for thinking about these things?
Taddeo: I think they are complementary. First of all, a clarification is in order. Information entropy here has nothing to do with physics or information theory; it's not a physical or mathematical concept. Entropy here refers to the destruction or corruption of informational entities, which is something we don't want. And informational entities are not only computers; seen from an informational perspective, all existing things are informational entities. In this sense, an action that generates entropy in the universe is any action that destroys, damages, or corrupts an informational entity: anything from destroying a beautiful painting, to launching a virus that damages information infrastructure, to killing a human being. Any action that makes the information environment worse off generates entropy and is therefore immoral. In this sense the prevention of information entropy is consistent with the saving of human life, because human beings contribute a great deal to the infosphere---killing a human being would generate a lot of information entropy.
This is all part of a wider ethical framework called Information Ethics, mainly developed by Luciano Floridi. Information Ethics ascribes a moral stance to all existing things. It does not have an ontological bias; that is to say, it doesn't privilege certain sorts of beings. This does not mean that according to Information Ethics all things have the 'same' moral value, but rather that they 'share' some common minimal rights and deserve some minimal respect. Here, the moral value of a particular entity would be proportional to its contributions to the information environment. So a white paper with one dot on it would have less moral value than, say, a book of poems, or a human being. That's one way of thinking about this.