Sharing is human. We are social. We communicate. We learn from each other. Our first conversations with people we don't know are anecdote competitions. If in the 15th century everyone had owned a printing press, Europe would have been littered with personal missives and opinions. Cameras were one of the first mass-market story-telling devices, and stories were told. Then: curated, bundled, and shared.
The genius of Facebook has always been its facilitation of sharing. Its pivotal innovation -- the one that inspired its first rash of furious remonstrations -- was the News Feed, which automatically shared friends' updates with one another. In the Friendster/MySpace world, users could visit their friends' pages, but nothing was delivered to them passively. Facebook's decision to push activity out to users' contacts led to howls about privacy -- and that's what made the service a sensation.
Facebook's role in our world is to lead us where we're already headed. We like to share who we are and what we like. We're consumers who pay more for things stamped with particular logos, after all; we shouldn't be taken aback when someone tries to encourage that impulse. Facebook has been there for almost a decade, guiding us toward a place where displaying what we're doing and where we are becomes the everyday documentation of an average Joe's life.
The company's biggest struggle has been figuring out how to make money from all that sharing. An early effort, Beacon, was a flop. People are happy to share information -- photos, stories, links, videos -- but only information they have carefully selected. Beacon took it upon itself to share information about online purchases and transactions -- and people revolted. It was Facebook's most notable failure, and it stemmed from sharing that the user hadn't chosen.
Last year's launch of Open Graph began an exploration of how to work around that. It combined two innovations: the global Like button and the ability of some sites to pull information from Facebook without your agreeing to it. Beacon lite. This met with outcry -- I'm losing control over my information! -- which quickly subsided as it became apparent that the intrusion was minimal. People weren't interested in your Pandora stations, but Facebook had cracked the door open to using your information the way it wanted.
Slate's Farhad Manjoo has perhaps the savviest take on the innovations Facebook announced yesterday. In addition to Timeline -- the elegant, deep presentation of a user's Facebook history -- the company revealed that it sought to make sharing information "frictionless," which is to say, automatic. Watch a movie or listen to a song and it gets shared, without the tedium of your clicking anything.
The problem with that, of course, is that it eliminates the curation aspect of our self-presentations. It would be as though I told everyone that I was wearing blue jeans and a somewhat worse-for-wear t-shirt right now in addition to revealing that earlier today I wore a sharp, tailored suit. Both are accurate, but only one is the impression I'd like to leave with people. (The latter.) Talking about the suit is Facebook. Talking about my scrubby jeans is Beacon.
I used to work at Adobe. One summer, the company brought in a number of well-known artists to work on a project, one of whom was a photographer. Using Photoshop, he cleaned up his photos of the other participants, noting that "a photo is not meant to be a dermatological record." This is extensible: the image we present to the world is not meant to include every single bit of information possible. What we share is selected to be a representation of the ideal we want to project, not a reflection of who we are. Our curation itself is representative; what we don't say says something, too. Facebook moving curation from us to its algorithms means we could lose some of our personality in what we present. It's akin to putting every photo in a photo album and letting the album worry about which ones to show.
But this is incidental. Facebook anticipates -- correctly -- that we want easy processes to share more and more about ourselves. Or, at least, that we will soon. We've always wanted simple ways to scrapbook, and Facebook is poised to be one of the simplest.
Where it may have missed the mark is in taking away our ability to decide what we show.
In the name of emotional well-being, college students are increasingly demanding protection from words and ideas they don’t like. Here’s why that’s disastrous for education—and mental health.
Something strange is happening at America’s colleges and universities. A movement is arising, undirected and driven largely by students, to scrub campuses clean of words, ideas, and subjects that might cause discomfort or give offense. Last December, Jeannie Suk wrote in an online article for The New Yorker about law students asking her fellow professors at Harvard not to teach rape law—or, in one case, even use the word violate (as in “that violates the law”) lest it cause students distress. In February, Laura Kipnis, a professor at Northwestern University, wrote an essay in The Chronicle of Higher Education describing a new campus politics of sexual paranoia—and was then subjected to a long investigation after students who were offended by the article and by a tweet she’d sent filed Title IX complaints against her. In June, a professor protecting himself with a pseudonym wrote an essay for Vox describing how gingerly he now has to teach. “I’m a Liberal Professor, and My Liberal Students Terrify Me,” the headline said. A number of popular comedians, including Chris Rock, have stopped performing on college campuses (see Caitlin Flanagan’s article in this month’s issue). Jerry Seinfeld and Bill Maher have publicly condemned the oversensitivity of college students, saying too many of them can’t take a joke.
The drug modafinil was recently found to enhance cognition in healthy people. Should you take it to get a raise?
If you could take a pill that will make you better at your job, with few or no negative consequences, would you do it?
In a meta-analysis recently published in European Neuropsychopharmacology, researchers from the University of Oxford and Harvard Medical School concluded that a drug called modafinil, which is typically used to treat sleep disorders, is a cognitive enhancer. Essentially, it can help normal people think better.
Out of all cognitive processes, modafinil was found to improve decision-making and planning the most in the 24 studies the authors reviewed. Some of the studies also showed gains in flexible thinking, combining information, or coping with novelty. The drug didn’t seem to influence creativity either way.
But no tale of posthumous success is quite as spectacular as that of Howard Phillips Lovecraft, the “cosmic horror” writer who died in Providence, Rhode Island, in 1937 at the age of 46. The circumstances of Lovecraft’s final years were as bleak as anyone’s. He ate expired canned food and wrote to a friend, “I was never closer to the bread-line.” He never saw his stories collectively published in book form, and, before succumbing to intestinal cancer, he wrote, “I have no illusions concerning the precarious status of my tales, and do not expect to become a serious competitor of my favorite weird authors.” Among the last words the author uttered were, “Sometimes the pain is unbearable.” His obituary in the Providence Evening Bulletin was “full of errors large and small,” according to his biographer.
As the vice president edges toward a presidential run, is he banking on further public disclosures to discredit the frontrunner?
As Joe Biden edges closer to a presidential run, there’s no shortage of theories as to what he’s up to. Former secretary of state Hillary Clinton has built a commanding lead in the national polls, giving Biden little apparent space to gain traction. Perhaps he’s counting on the early-primary state of South Carolina to provide a critical boost. He might be banking on appearing as a stronger general-election candidate than any of his potential rivals in the primary race. Maybe after spending the past 42 years of his life running for elective office, he just can’t stop.
But there’s one intriguing theory that has so far garnered little attention: What if Biden knows something about Democratic frontrunner Hillary Clinton that the rest of us don’t?
It is not too late to strengthen the Iran deal, a prominent critic says.
It appears likely, as of this writing, that Barack Obama will be victorious in his fight to implement the Iran nuclear deal negotiated by his secretary of state, John Kerry. Republicans in Congress don’t appear to have the votes necessary to void the agreement, and Benjamin Netanyahu’s campaign to subvert Obama may be remembered as one of the more counterproductive and shortsighted acts of an Israeli prime minister since the rebirth of the Jewish state 67 years ago.
Things could change, of course, and the Iranian regime, which is populated in good part by extremists, fundamentalist theocrats, and supporters of terrorism, could do something monumentally stupid in the coming weeks that could force on-the-fence Democrats to side with their Republican adversaries (remember the Café Milano fiasco, anyone?). But, generally speaking, the Obama administration and its European allies seem to have a clearer path to implementation than they had at the beginning of the month.
A new study shows that the field suffers from a reproducibility problem, but the extent of the issue is still hard to nail down.
No one is entirely clear on how Brian Nosek pulled it off, including Nosek himself. Over the last three years, the psychologist from the University of Virginia persuaded some 270 of his peers to channel their free time into repeating 100 published psychological experiments to see if they could get the same results a second time around. There would be no glory, no empirical eurekas, no breaking of fresh ground. Instead, this initiative—the Reproducibility Project—would be the first big systematic attempt to answer a question that has been vexing psychologists for years, if not decades: What proportion of results in their field are reliable?
In 1998, Toni Morrison wrote a comment for The New Yorker arguing that “white skin notwithstanding, this is our first black President. Blacker than any actual black person who could ever be elected in our children’s lifetime.” Last week, the New York Times implicitly cited Morrison’s piece and claimed the author was giving Clinton “a compliment.” This interpretation of Morrison’s claim is as common as it is erroneous.
The popular interpretation of Morrison’s point (exhibited here) holds that, summoning all of her powers, the writer gazed into the very essence of Clinton, and found him sufficiently soulful. In fact, Morrison’s point had little to do with soul of any kind. She was not much concerned with Clinton’s knowledge of Ebonics, his style of handshake, nor whether he pledged Alpha or Q. Morrison was concerned with power.
A new study finds an algorithmic word analysis is flawless at determining whether a person will have a psychotic episode.
Although the language of thinking is deliberate—let me think, I have to do some thinking—the actual experience of having thoughts is often passive. Ideas pop up like dandelions; thoughts occur suddenly and escape without warning. People swim in and out of pools of thought in a way that can feel, paradoxically, mindless.
Most of the time, people don’t actively track the way one thought flows into the next. But in psychiatry, much attention is paid to such intricacies of thinking. For instance, disorganized thought, evidenced by disjointed patterns in speech, is considered a hallmark characteristic of schizophrenia. Several studies of at-risk youths have found that doctors are able to guess with impressive accuracy—the best predictive models hover around 79 percent—whether a person will develop psychosis based on tracking that person’s speech patterns in interviews.