The social network is coming under some serious fire today for a mood study it conducted back in 2012. Over at the Atlantic, Robinson Meyer explains what it was all about:
"For one week in January 2012, data scientists skewed what almost 700,000 Facebook users saw when they logged into its service. Some people were shown content with a preponderance of happy and positive words; some were shown content analyzed as sadder than average. And when the week was over, these manipulated users were more likely to post either especially positive or negative words themselves."
The results were logged and analyzed for a study on "emotional contagion" released in the Proceedings of the National Academy of Sciences. Facebook users had no idea.
Though it was blessed as legal, the question now is whether it was ethical. As Adrienne LaFrance writes, also at the Atlantic, even Susan Fiske, the editor of the study, had some serious concerns about it.
"People are supposed to be, under most circumstances, told that they're going to be participants in research and then agree to it and have the option not to agree to it without penalty."
The Wire reached out to Jacob Silverman, whose book Terms of Service: Social Media, Surveillance, and the Price of Constant Connection comes out next year, to get his insight into what this all means.
So what does this all mean?
"In some ways, the Facebook study confirms things we already knew. Some past researchers have found that an initial positive or negative vote on a social site can prime users to vote in that direction. Sentiment appears to be contagious, or at least influential. And many new media sites — HuffPo, Upworthy, Buzzfeed, Business Insider — do a lot of A/B testing and data analysis to try to encourage readership, shares, time spent on site, and the like. Headlines are reworked, articles rearranged, etc. — all to keep you tuned in. In short, the internet is a vast collection of market research studies; we're the subjects."
How the Facebook study was different
"What's disturbing about how Facebook went about this, though, is that they essentially manipulated the sentiments of hundreds of thousands of users without asking permission (blame the terms of service agreements we all opt into). This research may tell us something about online behavior, but it's undoubtedly more useful for, and more revealing of, Facebook's own practices."
How this fits within Facebook's own practices
"Facebook cares most about two things: engagement and advertising. If Facebook, say, decides that filtering out negative posts helps keep people happy and clicking, there's little reason to think that they won't do just that. As long as the platform remains such an important gatekeeper — and their algorithms utterly opaque — we should be wary about the amount of power and trust we delegate to it."
The internet had some feelings about all of this:
The nice thing is that Facebook's longstanding amoral bullshittery hasn't been a social contagion in the tech world or anything. OH WAIT. — dan sinker (@dansinker) June 28, 2014
Wait a second: maybe the Facebook study is itself an experiment to see if it will make me write angry things on Twitter. — emily nussbaum (@emilynussbaum) June 28, 2014
This article is from the archive of our partner The Wire.