Everything We Know About Facebook's Secret Mood Manipulation Experiment

In other words, the experiment had already been run, so its human subjects were beyond protecting. Assuming the researchers did not see users’ confidential data, the results of the experiment could be examined without further endangering any subjects.

Both Cornell and Facebook have been reluctant to provide details about the process beyond their respective prepared statements. One of the study's authors told The Atlantic on Monday that he’s been advised by the university not to speak to reporters.

By the time the study reached Susan Fiske, the Princeton University psychology professor who edited the study for publication, Cornell’s IRB members had already determined it to be outside their purview.

Fiske had earlier told The Atlantic that the experiment was IRB-approved.

“I was concerned,” Fiske told The Atlantic on Saturday, “until I queried the authors and they said their local institutional review board had approved it—and apparently on the grounds that Facebook apparently manipulates people's News Feeds all the time.”

On Sunday, other reports raised questions about whether and how an IRB was consulted. In a Facebook post on Sunday, study author Adam Kramer referenced only “internal review practices.” And a Forbes report that day, citing an unnamed source, claimed that Facebook only used an internal review.

When The Atlantic asked Fiske to clarify Sunday, she said the researchers’ “revision letter said they had Cornell IRB approval as a ‘pre-existing dataset’ presumably from FB, who seems to have reviewed it as well in some unspecified way... Under IRB regulations, pre-existing dataset would have been approved previously and someone is just analyzing data already collected, often by someone else.”

The mention of a “pre-existing dataset” here matters because, as Fiske explained in a follow-up email, "presumably the data already existed when they applied to Cornell IRB.” (She also noted: “I am not second-guessing the decision.”) Cornell’s Monday statement confirms this presumption. 

On Saturday, Fiske said that she didn’t want “the originality of the research” to be lost, but called the experiment “an open ethical question.”

“It's ethically okay from the regulations perspective, but ethics are kind of social decisions. There's not an absolute answer. And so the level of outrage that appears to be happening suggests that maybe it shouldn't have been done...I'm still thinking about it and I'm a little creeped out, too.”

For more, read Atlantic editor Adrienne LaFrance’s full interview with Prof. Fiske.

From what we know now, were the experiment’s subjects able to provide informed consent?

In its ethical principles and code of conduct, the American Psychological Association (APA) defines informed consent like this:

When psychologists conduct research or provide assessment, therapy, counseling, or consulting services in person or via electronic transmission or other forms of communication, they obtain the informed consent of the individual or individuals using language that is reasonably understandable to that person or persons except when conducting such activities without consent is mandated by law or governmental regulation or as otherwise provided in this Ethics Code.

As mentioned above, the research seems to have been carried out under Facebook’s extensive terms of service. The company’s current data use policy, which governs exactly how it may use users’ data, runs to more than 9,000 words and uses the word “research” twice. But as Forbes writer Kashmir Hill reported Monday night, the data use policy in effect when the experiment was conducted never mentioned “research” at all—the word wasn’t inserted until May 2012.

Never mind whether the current data use policy constitutes “language that is reasonably understandable”: Under the January 2012 terms of service, did Facebook secure even shaky consent?

The APA has further guidelines for so-called “deceptive research” like this, where the study’s real purpose can’t be disclosed to participants while it is underway. The last of these guidelines is:

Psychologists explain any deception that is an integral feature of the design and conduct of an experiment to participants as early as is feasible, preferably at the conclusion of their participation, but no later than at the conclusion of the data collection, and permit participants to withdraw their data. 

At the end of the experiment, did Facebook tell the user-subjects that their News Feeds had been altered for the sake of research? If it did, the study never mentions it.

James Grimmelmann, a law professor at the University of Maryland, believes the study did not secure informed consent. And he adds that Facebook fails even its own standards, which are lower than those of the academy:

A stronger reason is that even when Facebook manipulates our News Feeds to sell us things, it is supposed—legally and ethically—to meet certain minimal standards. Anything on Facebook that is actually an ad is labelled as such (even if not always clearly.) This study failed even that test, and for a particularly unappealing research goal: We wanted to see if we could make you feel bad without you noticing. We succeeded.

Did the U.S. government sponsor the research?

Cornell has now updated its June 10 story to say that the research received no external funding. Originally, Cornell had identified the Army Research Office, an agency within the U.S. Army that funds basic research in the military’s interest, as one of the funders of the experiment.

Do these kinds of News Feed tweaks happen at other times?

At any one time, Facebook said last year, there were on average 1,500 pieces of content that could show up in your News Feed. The company uses an algorithm to determine what to display and what to hide.

It talks about this algorithm very rarely, but we know it’s very powerful. Last year, the company changed News Feed to surface more news stories. Websites like BuzzFeed and Upworthy proceeded to see record-busting numbers of visitors.

So we know it happens. Consider Fiske’s explanation of the research ethics here—the study was approved “on the grounds that Facebook apparently manipulates people's News Feeds all the time.” And consider also that from this study alone Facebook knows at least one knob to tweak to get users to post more words on Facebook. 


* This post originally stated that an institutional review board, or IRB, was consulted regarding certain aspects of data collection before the experiment took place.

Adrienne LaFrance contributed writing and reporting.

Robinson Meyer is an associate editor at The Atlantic, where he covers technology.