Weapons systems that fire autonomously are the apotheosis of elites insulating themselves from accountability.
The aversion I have to autonomous weapons dates back to a bygone afternoon when I sat playing GoldenEye, a James Bond-themed video game that I was progressing through with all the alacrity of its lead character until I reached the jungle level. Expert as I was at evading enemy soldiers, I found myself gunned down in a spray of bullets from machine guns that turned out to be motion-activated. "Oh bollocks," I cursed, determined to stay in character. "That's hardly sporting."
I suppose some of you will think I'm a nutty conspiracy theorist when I inform you that many inside the Defense Department are eager to deploy machines on the battlefield that autonomously kill rather than requiring human intervention to "pull the trigger." It sounds like dystopian fiction. But it has its champions, and most informed observers think it's inevitable that they'll prevail in the coming years. There aren't even many objections to autonomous military hardware that doesn't kill: Think of drones that take off, surveil, and land entirely on autopilot. Is the day coming when drones like that are armed, programmed to seek out certain sorts of images, and automated to fire in certain circumstances? The short answer is that, given present trends, it's a realistic possibility.
Noel Sharkey is shaken and stirred -- and he isn't alone. As he recently noted in The Guardian, Human Rights Watch is so concerned that it is calling on international actors to immediately "prohibit the development, production and use of fully autonomous weapons through an international legally binding instrument; and adopt national laws and policies to prohibit the development, production, and use of fully autonomous weapons." Would that President Obama were listening.
The same Guardian column flags a quotation from a recent Pentagon directive on autonomous weapons that is worth bearing in mind. The lethal machines of the future could conceivably fail due to "a number of causes," the directive states, "including but not limited to human error, human-machine interaction failures, malfunctions, communications degradation, software coding errors, enemy cyber attacks or infiltration into the industrial supply chain, jamming, spoofing, decoys, other enemy countermeasures or actions, or unanticipated situations on the battlefield."
The arguments against adopting this technology nearly make themselves. So I'll focus on a tangential but important observation.
In recent decades, America's elites -- elected officials, bureaucrats, and CEOs, for starters -- have succeeded spectacularly at insulating themselves from responsibility for their failures. As the global economy melted down, Wall Street got bailouts. CEOs who preside over shrinking companies still depart with "golden parachute" severance packages. The foreign-policy establishment remains virtually unchanged despite the catastrophic errors so many of its members made during the Iraq War. The conservative movement is failing to advance its ideas or its political prospects, yet its institutional leaders and infotainment personalities are as profitable as ever. It is a self-evidently pernicious trend: Once someone achieves insider status, their future success depends less and less on how competently and responsibly they perform.
It's no wonder that some military leaders are so eager for the advent of autonomous weapons. At present, if WikiLeaks gets ahold of a video that shows innocents being fired upon, the incident in question can be traced back to an individual triggerman, an officer who gave him orders, and perhaps particular people who provided them with faulty intelligence. But an innocent who dies at the handlessness of an automated killing machine? How easy to phrase the obligatory apology in the passive voice! How implausible that any individual would be held culpable for the failure!
And would there ever be accusations that a supposed mistake was actually an intentional killing being conveniently blamed on autonomy gone wrong? I hardly think such suggestions would come from within the establishment. Levying that sort of accusation against a sitting president or the "brave men and women of the United States military" is precisely the sort of thing that bipartisan Washington, D.C., culture declares beyond the pale of reasonable discourse. And when autonomous Chinese drones "accidentally" fire upon some Tibetans? Even a genuine accident in America's past would make it that much harder to deny the Chinese the benefit of the doubt that we ourselves had requested.
Most areas of American culture would benefit if the relevant set of elites were more accountable, but nowhere is individual accountability more important than when death is being meted out. The secrecy that surrounds the national-security establishment and the outsourcing of drone strikes to the CIA have already introduced a problematic lack of accountability to the War on Terrorism. Autonomous killing machines would exacerbate the problem more than anything else I can imagine.
Conor Friedersdorf is a staff writer at The Atlantic, where he focuses on politics and national affairs. He lives in Venice, California, and is the founding editor of The Best of Journalism, a newsletter devoted to exceptional nonfiction.