When news of Facebook’s attempt to emotionally manipulate its users emerged this weekend, debate quickly focused on the experiment’s ethics.
Lauren McCarthy, though, kept thinking about the experiment itself.
“I was taken aback” at first, said McCarthy, an artist, programmer, and researcher-in-residence at New York University’s Interactive Telecommunications Program. But as discussion went on, she found that “no one was talking about what the study might mean. What could we do beyond the ethics?”
Now, she has a preliminary answer. McCarthy has made a browser extension, Facebook Mood Manipulator, that lets users run Facebook’s experiment on their own News Feeds. Just as the original 2012 study surfaced posts analyzed to be either happier or sadder than average, McCarthy’s extension skews users’ feeds either more positive or more negative—except that, this time, users themselves control the dials.
McCarthy’s extension is even powered by the same text analysis software that Facebook used: Linguistic Inquiry and Word Count 2007, or LIWC, developed by researchers at the University of Texas at Austin. But unlike the Facebook study, which only surfaced posts judged happier or sadder, McCarthy’s software also lets people see posts that use more “aggressive” or “open” words in their feed.
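LIWC-style analysis is, at heart, dictionary-based word counting: each word in a post is checked against category word lists, and the post’s score for a category is the fraction of its words that match. A minimal sketch of the idea, with made-up word lists standing in for LIWC’s actual (proprietary) dictionaries:

```python
import re

# Illustrative stand-in word lists -- NOT LIWC's real dictionaries.
CATEGORIES = {
    "positive": {"happy", "love", "great", "congratulations", "glad"},
    "negative": {"sad", "hate", "awful", "loss", "hurt"},
}

def score(text):
    """Return each category's share of the total words in `text`."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return {name: 0.0 for name in CATEGORIES}
    return {
        name: sum(w in vocab for w in words) / len(words)
        for name, vocab in CATEGORIES.items()
    }

# A feed filter could then rank or hide posts by these scores:
post = "So happy for you both -- congratulations!"
print(score(post))  # positive words make up 2 of the 6 words here
```

With only a handful of words in a typical status, a single matching word swings a category score by a large step, which is one reason this kind of scoring gets noisy on short posts.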
The extension, in other words, lets users reclaim some control over their own feed. As the sociologist Zeynep Tufekci wrote on Twitter today, the Facebook story grabs our attention because it’s “about who regulates our life experience.” McCarthy’s extension lets users do some of that regulation for themselves, and it alludes to the possibility that such self-regulation could be the norm.
“What does it mean that we can manipulate our moods? What is it like to have this tech?” she asked. “You wake up in the morning and decide what kind of mood you want to have.”
In other words, McCarthy says, this technology could become “like an interface for your mind.” Of course, for now, that mind-interface is imperfect. McCarthy has worked with LIWC before, and she echoed other critics in saying that the analysis software isn’t designed for text snippets as short as Facebook statuses.
“This is really not that accurate for a short sentence or post,” she said. “This whole system was made based on speeches or long bodies of spoken text. It’s not intended for fewer than 100 words, and its accuracy increases as you get into a couple paragraphs.”
And while that makes its social-media filtration—and the findings of the original Facebook experiment—a little imprecise, that’s sort of the point. “When you manipulate things, it’s a little subtle what’s changing. There’s not like a positive mode or a negative mode,” she said.
Her extension lets users discover what it’s like to wrestle with their own attentional algorithm—as subtle, or as stupid, as it can sometimes be. That’s what I found, at least, when I played around with the extension this morning. When I cranked the “positive” slider all the way to the right, my feed went blank, refreshed—and filled with sad U.S. soccer-related lamentations. It did not make me feel particularly happier.
In fact, only when I told the extension to surface “emotional” content, regardless of positivity or negativity, did I notice a real change in tenor. Two high school classmates had had their first child, and their hospital pictures appeared in my feed. Last night, too, an old camp counselor of mine made his Broadway debut. I saw his status and read the many congratulations. I even liked it.
I wouldn’t have seen either of those posts if I hadn’t used McCarthy’s extension—I hadn’t seen them when I used “normal” Facebook this morning, when both were already more than 12 hours old. For a couple seconds, I felt a little more cheerful.
Or maybe I’m just projecting, having already been numbed to the sad, sad soccer statuses.
McCarthy’s extension is free and available only for Google Chrome. You can download it from her website.