Weapons systems that fire autonomously are the apotheosis of elites insulating themselves from accountability.
The aversion I have to autonomous weapons dates back to a bygone afternoon when I sat playing GoldenEye, a James Bond-themed video game that I was progressing through with all the alacrity of its lead character until I reached the jungle level. Expert as I was at evading enemy soldiers, I found myself gunned down in a spray of bullets from a machine gun that turned out to be motion-activated. "Oh bollocks," I cursed, determined to stay in character. "That's hardly sporting."
I suppose some of you will think I'm a nutty conspiracy theorist when I inform you that many inside the Defense Department are eager to deploy machines on the battlefield that autonomously kill rather than requiring human intervention to "pull the trigger." It sounds like dystopian fiction. But it has its champions, and most informed observers think it's inevitable that they'll win victories in coming years. There aren't even many objections to autonomous military hardware that doesn't kill: Think of drones that take off, surveil, and land entirely on autopilot. Is the day coming when drones like that are armed, programmed to seek out certain sorts of images, and automated to fire in certain circumstances? The short answer is that, given present trends, it's a realistic possibility.
Noel Sharkey is shaken and stirred -- and he isn't alone. As he recently noted in The Guardian, Human Rights Watch is so concerned that it is calling on international actors to immediately "prohibit the development, production and use of fully autonomous weapons through an international legally binding instrument; and adopt national laws and policies to prohibit the development, production, and use of fully autonomous weapons." Would that President Obama were listening.
The same Guardian column flags a quotation from a recent Pentagon directive on autonomous weapons that is worth bearing in mind. The lethal machines of the future could conceivably fail due to "a number of causes," the directive states, "including but not limited to human error, human-machine interaction failures, malfunctions, communications degradation, software coding errors, enemy cyber attacks or infiltration into the industrial supply chain, jamming, spoofing, decoys, other enemy countermeasures or actions, or unanticipated situations on the battlefield."
The arguments against adopting this technology nearly make themselves. So I'll focus on a tangential but important observation.
In recent decades, America's elites -- its elected officials, bureaucrats, and CEOs, for starters -- have succeeded spectacularly at insulating themselves from responsibility for their failures. As the global economy melted down, Wall Street got bailouts. CEOs who preside over shrinking companies still depart with "golden parachute" severance packages. The foreign-policy establishment remains virtually unchanged despite the catastrophic errors so many of its members made during the Iraq War. The conservative movement is failing to advance its ideas or its political prospects, yet its institutional leaders and infotainment personalities are as profitable as ever. It is a self-evidently pernicious trend: Once someone achieves insider status, their future success is less and less dependent on how competently and responsibly they perform.
It's no wonder that some military leaders are so eager for the advent of autonomous weapons. At present, if WikiLeaks gets ahold of a video that shows innocents being fired upon, the incident in question can be traced back to an individual triggerman, an officer who gave him orders, and perhaps particular people who provided them with faulty intelligence. But an innocent who dies at the handlessness of an automated killing machine? How easy to phrase the obligatory apology in the passive voice! How implausible that any individual would be held culpable for the failure!
And would there ever be accusations that a supposed mistake was actually an intentional killing being conveniently blamed on autonomy gone wrong? I hardly think such suggestions would come from within the establishment. Levying that sort of accusation against a sitting president or the "brave men and women of the United States military" is precisely the sort of thing that bipartisan Washington, D.C., culture declares beyond the pale of reasonable discourse. And when autonomous Chinese drones "accidentally" fire upon some Tibetans? Even a genuine accident in America's past would make it that much harder to refrain from giving the Chinese the benefit of the doubt that we ourselves had requested.
Most areas of American culture would benefit if the relevant set of elites were more accountable, but nowhere is individual accountability more important than when death is being meted out. The secrecy that surrounds the national-security establishment and the outsourcing of drone strikes to the CIA already introduce a problematic lack of accountability to the War on Terrorism. Autonomous killing machines would exacerbate the problem more than anything else I can imagine.
Conor Friedersdorf is a staff writer at The Atlantic, where he focuses on politics and national affairs. He lives in Venice, California, and is the founding editor of The Best of Journalism, a newsletter devoted to exceptional nonfiction.
In the name of emotional well-being, college students are increasingly demanding protection from words and ideas they don’t like. Here’s why that’s disastrous for education—and mental health.
Something strange is happening at America’s colleges and universities. A movement is arising, undirected and driven largely by students, to scrub campuses clean of words, ideas, and subjects that might cause discomfort or give offense. Last December, Jeannie Suk wrote in an online article for The New Yorker about law students asking her fellow professors at Harvard not to teach rape law—or, in one case, even use the word violate (as in “that violates the law”) lest it cause students distress. In February, Laura Kipnis, a professor at Northwestern University, wrote an essay in The Chronicle of Higher Education describing a new campus politics of sexual paranoia—and was then subjected to a long investigation after students who were offended by the article and by a tweet she’d sent filed Title IX complaints against her. In June, a professor protecting himself with a pseudonym wrote an essay for Vox describing how gingerly he now has to teach. “I’m a Liberal Professor, and My Liberal Students Terrify Me,” the headline said. A number of popular comedians, including Chris Rock, have stopped performing on college campuses (see Caitlin Flanagan’s article in this month’s issue). Jerry Seinfeld and Bill Maher have publicly condemned the oversensitivity of college students, saying too many of them can’t take a joke.
The drug modafinil was recently found to enhance cognition in healthy people. Should you take it to get a raise?
If you could take a pill that will make you better at your job, with few or no negative consequences, would you do it?
In a meta-analysis recently published in European Neuropsychopharmacology, researchers from the University of Oxford and Harvard Medical School concluded that a drug called modafinil, which is typically used to treat sleep disorders, is a cognitive enhancer. Essentially, it can help normal people think better.
Out of all cognitive processes, modafinil was found to improve decision-making and planning the most in the 24 studies the authors reviewed. Some of the studies also showed gains in flexible thinking, combining information, or coping with novelty. The drug didn’t seem to influence creativity either way.
All of the downsides of being a subordinate, combined with all of the downsides of having to tell people to do things they don't want to do.
When researchers try to determine the types of workers who are most prone to depression, the focus is usually on the misery of those at the bottom of a company’s hierarchy—the presumed stressors being the menial duties they're tasked with and their lack of say in defining the scope of their jobs.
But it turns out that middle managers have it worse. In a new study of nearly 22,000 full-time workers (drawn from the National Epidemiological Survey on Alcohol and Related Conditions), researchers at Columbia University found that 18 percent of supervisors and managers reported symptoms of depression. For blue-collar workers, that figure was 12 percent, and for owners and executives, it was only 11 percent.
Four and a half years of violent conflict have destroyed entire regions of Syria. Caught in the middle of all this horror are the children of Syria, relying on parents who have lost control of their own lives and are now being forced to make difficult choices in desperate circumstances.
Four and a half years of violent conflict have destroyed entire regions of Syria. Neighborhoods have been smashed by shelling and government barrel bombs, and towns have been seized by rebels and ISIS militants, then retaken by government troops, killing hundreds of thousands and injuring even more. The United Nations now estimates that more than 4 million Syrians have become refugees, forced to flee to neighboring countries or Europe. Caught in the middle of all this horror are the children of Syria, relying on parents who have lost control of their own lives and are now being forced to make difficult choices in desperate circumstances. Though many families remain in Syria’s war zones, thousands of others are taking dangerous measures to escape, evading militias, government forces, border guards, predatory traffickers, and more, as they struggle to reach safety far from home.
The Islamic State is no mere collection of psychopaths. It is a religious group with carefully considered beliefs, among them that it is a key agent of the coming apocalypse. Here’s what that means for its strategy—and for how to stop it.
What is the Islamic State?
Where did it come from, and what are its intentions? The simplicity of these questions can be deceiving, and few Western leaders seem to know the answers. In December, The New York Times published confidential comments by Major General Michael K. Nagata, the Special Operations commander for the United States in the Middle East, admitting that he had hardly begun figuring out the Islamic State’s appeal. “We have not defeated the idea,” he said. “We do not even understand the idea.” In the past year, President Obama has referred to the Islamic State, variously, as “not Islamic” and as al-Qaeda’s “jayvee team,” statements that reflected confusion about the group, and may have contributed to significant strategic errors.
A new study shows that the field suffers from a reproducibility problem, but the extent of the issue is still hard to nail down.
No one is entirely clear on how Brian Nosek pulled it off, including Nosek himself. Over the last three years, the psychologist from the University of Virginia persuaded some 270 of his peers to channel their free time into repeating 100 published psychological experiments to see if they could get the same results a second time around. There would be no glory, no empirical eurekas, no breaking of fresh ground. Instead, this initiative—the Reproducibility Project—would be the first big systematic attempt to answer a question that has been vexing psychologists for years, if not decades: What proportion of results in their field are reliable?
Yanis Varoufakis on Grexit, the media, and economics
When Yanis Varoufakis was elected to parliament and then named as Greek finance minister in January, he embarked on an extraordinary seven months of negotiations with the country’s creditors and its European partners.
On July 6, Greek voters backed his hardline stance in a referendum, with a resounding 62 percent voting No to the European Union’s ultimatum. On that night, he resigned, after Prime Minister Alexis Tsipras, fearful of an ugly exit from the euro zone, decided to go against the popular verdict. Since then, the governing party, Syriza, has splintered and a snap election has been called. Varoufakis remains a member of parliament and a prominent voice in Greek and European politics.
It is not too late to strengthen the Iran deal, a prominent critic says.
It appears likely, as of this writing, that Barack Obama will be victorious in his fight to implement the Iran nuclear deal negotiated by his secretary of state, John Kerry. Republicans in Congress don’t appear to have the votes necessary to void the agreement, and Benjamin Netanyahu’s campaign to subvert Obama may be remembered as one of the more counterproductive and shortsighted acts of an Israeli prime minister since the rebirth of the Jewish state 67 years ago.
Things could change, of course, and the Iranian regime, which is populated in good part by extremists, fundamentalist theocrats, and supporters of terrorism, could do something monumentally stupid in the coming weeks that could force on-the-fence Democrats to side with their Republican adversaries (remember the Café Milano fiasco, anyone?). But, generally speaking, the Obama administration, and its European allies, seem to have a clearer path to implementation than they had at the beginning of the month.
But no tale of posthumous success is quite as spectacular as that of Howard Phillips Lovecraft, the “cosmic horror” writer who died in Providence, Rhode Island, in 1937 at the age of 46. The circumstances of Lovecraft’s final years were as bleak as anyone’s. He ate expired canned food and wrote to a friend, “I was never closer to the bread-line.” He never saw his stories collectively published in book form, and, before succumbing to intestinal cancer, he wrote, “I have no illusions concerning the precarious status of my tales, and do not expect to become a serious competitor of my favorite weird authors.” Among the last words the author uttered were, “Sometimes the pain is unbearable.” His obituary in the Providence Evening Bulletin was “full of errors large and small,” according to his biographer.
A new study finds an algorithmic word analysis is flawless at determining whether a person will have a psychotic episode.
Although the language of thinking is deliberate—let me think, I have to do some thinking—the actual experience of having thoughts is often passive. Ideas pop up like dandelions; thoughts occur suddenly and escape without warning. People swim in and out of pools of thought in a way that can feel, paradoxically, mindless.
Most of the time, people don’t actively track the way one thought flows into the next. But in psychiatry, much attention is paid to such intricacies of thinking. For instance, disorganized thought, evidenced by disjointed patterns in speech, is considered a hallmark characteristic of schizophrenia. Several studies of at-risk youths have found that doctors are able to guess with impressive accuracy—the best predictive models hover around 79 percent—whether a person will develop psychosis based on tracking that person’s speech patterns in interviews.