Self-fulfilling rumors of ethnic violence spread like a virus across the newly wired India, sending 300,000 citizens fleeing and leading the government to extreme measures.
Smoke hangs over Mumbai at the scene of a violent protest by Muslims in response to unfounded rumors of anti-Muslim violence in a distant region. (AP)
Technology can be a great liberator, but can it sometimes be a public menace? The Indian government seems to think so: it has blocked around 250 websites, ordered Google and Facebook to pull content, threatened legal action against Twitter if it doesn't delete certain accounts, and has arrested several people for sending inflammatory text messages, all in the name of public safety. If you're appalled, you're not alone: the U.S. State Department responded by calling on India to respect "full freedom of the internet," highlighting the growing divide between the two governments on web freedom.
But the Indian censorship -- and it is censorship, despite the government's insistence otherwise -- may not be so clear-cut a case of state oppression and overreach. It turns out that the Indian government might be right to fear that technology, for all the very real benefits it's brought India, could also be helping to magnify ancient communal tensions in ways that cost lives and, perhaps even worse, might destabilize the delicate social balance within the world's second-most-populous country.
The story begins, depending on how you look at it, either 20 years, one month, or one week ago. In 1993, two ethnic groups in the far-northeastern Indian state of Assam clashed over who had more of a right to the land: members of the local Bodo tribe won, and the Muslim Indians lost, fleeing into refugee camps. Last month, that conflict resurfaced, as it periodically does, when a few migrants from Assam were beaten up near the faraway city of Mumbai. No one really knows what happened, but the public perception seems to be that some of Mumbai's Muslims had attacked the Bodo migrants as revenge for the 1993 crisis. Then, last week, two sets of equally dangerous rumors spread across India: that Muslims throughout the country were about to attack northeastern migrants, and, in apparent response, that Bodos in their home state of Assam were planning a pre-emptive strike on the area's Muslims.
That the two rumors were almost certainly unfounded is beside the point: they were mutually reinforcing. The more that people heard about them, the truer they became. Muslims, fearing their fellow believers in Assam were in mortal peril, staged a large protest in Mumbai. Northeastern migrants in the area, afraid the reopened communal tensions could put them at risk, fled. Hearing about this back in Assam, some northeasterners perceived it as proof of coming Muslim violence and, apparently enraged, attacked the region's Muslims. It's not hard to see how things spiraled out of control from there. By the end of the weekend, northeastern migrants were streaming onto trains to head home to Assam, and Muslims in Assam were fleeing en masse to refugee camps.
Technology didn't cause any of this, of course. But social media and text messaging, both of which are becoming increasingly common across India's enormous lower and middle classes, accelerated the flow of rumors and of inflammatory images. Some of the material turns out to have been fake: doctored images and videos showed anti-Muslim attacks that never happened. Because the rumors can be self-fulfilling, their lightning-fast spread across India's vast population, much of which is very newly connected to the web, can be costly. The original 1993 crisis displaced an estimated 20,000 people, but this most recent manifestation has already displaced 300,000, and killed 80. No doubt there are many factors that might explain the new severity of this old crisis, but with the spread of rumors apparently playing a significant role, the recent explosion in Indian Internet access rates (the 100 millionth Indian web user logged on in December) could be relevant. The government, unable to counter the destabilizing rumors, shut down some of the means of their dispersal.
Whether or not the Indian government's censorship does anything to calm this crisis, its apparent desperation is understandable. Still, India's readiness to censor the web is part of the government's longer-running effort to regulate the Internet, to which Western governments and web freedom advocates have strenuously objected. Some of India's sweeping restrictions compel web companies like Google and Facebook to self-police, and then self-censor, any content that could be perceived as blasphemous or offensive to ethnic groups. Protesters in India decry the restrictions as extreme, and they're not wrong.
When governments in places like Ethiopia or China censor the internet, they tend to cite some version of the same basic idea: free discussion is a threat to "national stability." Typically, web freedom activists perceive this as little more than an excuse for online authoritarianism, and they're probably often correct. But what if, in India's case, the government could actually be right? Can Photoshopping up some "evidence" of ethnic attacks be akin to inciting violence? What about sending a text message falsely claiming such attacks, for which a Bangalore man was arrested? At what point does a Facebook rumor become a cry of "fire" in the crowded theater of Indian ethnic anxieties?
Walter Russell Mead, writing on the ongoing crisis, called India's long-running communal tensions "the powder keg in the basement." With the already-dangerous risk of ethnic combustion heightened by a population with easy access to rumors and an apparent predisposition to believing them, maybe that powder keg justifies Indian censorship. Or maybe it doesn't; free speech is its own public good and public right, and, in any case, censoring discussion of such sensitive national issues could make it more difficult for India to actually confront them. This is just one of the many difficult questions that Indian leaders will grapple with as hundreds of thousands of their citizens flee their homes, chased out by "a swirl of unfounded rumors." I don't envy them.
The comedian's n-bomb at the White House Correspondents’ Dinner highlights a generational shift in black culture.
Georgia McDowell was born the daughter of farmers and teachers in North Carolina in 1902. She was my great-grandmother, and she taught me to read, despite the dementia that clouded her mind and the dyslexia that interrupted mine. I loved Miss Georgia, though she kept as many hard lines in her home as she had in her classrooms. One of the hardest lines was common to many black households: The word “nigger” and all of its derivatives were strict taboos in person, on television, and on radio from any source, black or otherwise, so long as she lived and breathed. She’d kept the taboo through decades of teaching black students and raising black children. For most of my childhood, the taboo was absolute.
When Apple announced in 2013 that its next iPhone would include a fingerprint reader, it touted the feature as a leap forward in security. Many people don’t set up a passcode on their phones, Apple SVP Phil Schiller said at the keynote event where the Touch ID sensor was unveiled, but making security easier and faster might convince more users to protect their phones. (Of course, Apple wasn’t the first to stuff a fingerprint reader into a flagship smartphone, but the iPhone’s Touch ID took the feature mainstream.)
The system itself proved quite secure—scanned fingerprints are stored, encrypted, and processed locally rather than being sent to Apple for verification—but the widespread use of fingerprint data to unlock iPhones worried some experts. One of the biggest questions that hung over the transition was legal rather than technical: How might a fingerprint-secured iPhone be treated in a court of law?
The billionaire’s bid for the nomination was opposed by many insiders—but his success reveals the ascendance of other elements of the party coalition.
In The Party Decides, an influential book about how presidential nominees are selected, political scientists John Zaller, Hans Noel, David Karol, and Marty Cohen argue that despite reforms designed to wrest control of the process from insiders at smoke-filled nominating conventions, political parties still exert tremendous influence on who makes it to general elections. They do so partly through “invisible primaries,” the authors posited—think of how the Republican establishment coalesced around George W. Bush in 2000, long before any ballots were cast, presenting him as a fait accompli to voters who’d scarcely started to think about the election; or how insider Democrats elevated Hillary Clinton this election cycle.
Nearly half of Americans would have trouble finding $400 to pay for an emergency. I’m one of them.
Since 2013, the Federal Reserve Board has conducted a survey to “monitor the financial and economic status of American consumers.” Most of the data in the latest survey, frankly, are less than earth-shattering: 49 percent of part-time workers would prefer to work more hours at their current wage; 29 percent of Americans expect to earn a higher income in the coming year; 43 percent of homeowners who have owned their home for at least a year believe its value has increased. But the answer to one question was astonishing. The Fed asked respondents how they would pay for a $400 emergency. The answer: 47 percent of respondents said that either they would cover the expense by borrowing or selling something, or they would not be able to come up with the $400 at all. Four hundred dollars! Who knew?
It’s a paradox: Shouldn’t the most accomplished be well equipped to make choices that maximize life satisfaction?
There are three things, once one’s basic needs are satisfied, that academic literature points to as the ingredients for happiness: having meaningful social relationships, being good at whatever it is one spends one’s days doing, and having the freedom to make life decisions independently.
But research into happiness has also yielded something a little less obvious: Being better educated, richer, or more accomplished doesn’t do much to predict whether someone will be happy. In fact, it might mean someone is less likely to be satisfied with life.
That second finding is the puzzle that Raj Raghunathan, a professor of marketing at The University of Texas at Austin’s McCombs School of Business, tries to make sense of in his recent book, If You’re So Smart, Why Aren’t You Happy? Raghunathan’s writing does fall under the category of self-help (with all of the pep talks and progress worksheets that that entails), but his commitment to scientific research serves as ballast for the genre’s more glib tendencies.
The Massachusetts Supreme Court will decide whether a local shrine should be tax-exempt—a decision that could have broad implications for faith organizations in America.
Property-tax battles are rarely sexy. But a case now in front of the Massachusetts Supreme Judicial Court, about whether the 21 religious brothers and sisters who run the Shrine of Our Lady of LaSalette in Attleboro should have to pay taxes, could have huge repercussions. The Court’s decision will be an important part of the ongoing debate in America about who defines religious practice—believers or bureaucrats—and whether religion itself should be afforded a special place under the law.
The case centers on a colonial-era law in Massachusetts that exempts religious houses of worship and parsonages from property taxes if they are used for religious worship or instruction. The shrine has enjoyed this perk since its founding in 1953. But in recent years, the City of Attleboro, nestled between Providence and Boston, has faced a tightening budget. It began looking to see where it could collect more revenue. The shrine, the only major tourist attraction in town, was an obvious target for tax collectors.
For some, abandoning expensive urban centers would be a huge financial relief.
Neal Gabler has been a formative writer for me: His Winchell: Gossip, Power, and the Culture of Celebrity was one of the books that led me to think about leaving scholarship behind and write nonfiction instead, and Walt Disney: The Triumph of the American Imagination was the first book I reviewed as a freelance writer. To me, he exemplifies the best mix of intensive archival research and narrative kick.
So reading his recent essay, "The Secret Shame of Middle-Class Americans," was a gut punch: First, I learned about a role model of mine whose talent, in my opinion, should preclude him from financial woes. And, then, I was socked by narcissistic outrage: I, too, struggle with money! I, too, am a failing middle-class American! I, too, am a writer of nonfiction who should be better compensated!
A professor of cognitive science argues that the world is nothing like the one we experience through our senses.
As we go about our daily lives, we tend to assume that our perceptions—sights, sounds, textures, tastes—are an accurate portrayal of the real world. Sure, when we stop and think about it—or when we find ourselves fooled by a perceptual illusion—we realize with a jolt that what we perceive is never the world directly, but rather our brain’s best guess at what that world is like, a kind of internal simulation of an external reality. Still, we bank on the fact that our simulation is a reasonably decent one. If it weren’t, wouldn’t evolution have weeded us out by now? The true reality might be forever beyond our reach, but surely our senses give us at least an inkling of what it’s really like.
Three Atlantic staffers discuss “Home,” the second episode of the sixth season.
Every week for the sixth season of Game of Thrones, Christopher Orr, Spencer Kornhaber, and Lenika Cruz will be discussing new episodes of the HBO drama. Because no screeners are being made available to critics in advance this year, we'll be posting our thoughts in installments.
The newly discovered worlds are now the most promising targets in the search for life among the stars—and the race to take a closer look at them has begun.
The robot telescope settles on its target, a star that sits closer than all but a tiny fraction of the tens of billions of stellar systems that make up the Milky Way. Its mirror grabs light for 55 seconds, again and again. The robot telescope—called TRAPPIST—will observe the star for 245 hours across 62 nights, making 12,295 measurements. Eleven times, it will see the star dim, ever so slightly. This dip in luminosity, called a transit, has a straightforward astronomical explanation: It’s a planet passing in front of the star, blocking just a bit of its light. In this case, the transits tell us that three planets orbit the star.
“So what?” you might think.
Astronomers have been spotting planets around distant stars for years now, using the transit method, among others. Not a month goes by without a headline touting the discovery of new “exoplanets.” But these planets are different, and not only because they’re near. Like the Earth, these planets could potentially permit liquid water to persist on their surfaces—which is thought to be a key precondition for the emergence of life. Today, when their discovery is published in Nature, they will instantly become the most promising planets yet found in the search for life among the stars.