Is Facebook making us lonely? Connected? Dumb? Happy? Complacent? Racist?
Maybe, according to a new study. At the very least, the way heavy Facebook users read "news" on the site might make them more likely to accept racist messages among the baby photos and semi-targeted advertising.
Psychologists Shannon Rauch and Kimberley Schanz published their work in the journal Computers in Human Behavior. They sampled 623 Internet users (all white, 70 percent students), asking them to indicate how frequently they used Facebook. Each participant then read one of three versions of a Facebook Notes page they were told was written by a 26-year-old named Jack Brown. "Jack" was white and male. The first version of Jack's message contained what the researchers call a "superiority message": It "contrasted the behaviors of black and white individuals, only to find consistent superiority of the whites." The second version offered a "victim message," with Jack suggesting that "whites are the most oppressed racial group in America." The third version offered an "egalitarian message," with Jack citing examples of racism he'd witnessed against black individuals and concluding that discrimination, despite the progress we've made, still "exists widely today."
The researchers then asked participants to rate the version they read on factors like "how much they agreed with the message," "how accurate they found it," "how much they liked the writer," and, significantly, how likely they were to share the post with others -- whether to propagate it or to argue against it.
Their findings? "Frequent users are particularly disposed to be influenced by negative racial messages." The group of more-frequent Facebook users didn't differ from others in their reaction to the egalitarian message. But those users "were more positive toward the messages with racist content -- particularly the superiority message."
So, oof. This is, to say the least, troubling. And yet it's also not fully surprising. The study, in fact, confirms the hypothesis that Rauch and Schanz started with: "We predict," they noted, "that due to potential chronic traits and/or their adaptation to a Facebook culture of shallow processing and agreement, frequent Facebook users are highly susceptible to persuasive messages compared to less frequent users."
Facebook, for all the unprecedented connection it fosters among previously atomized people, fosters a very particular kind of connection: one that is mediated, at all times, by Facebook. And one that therefore makes very particular kinds of assumptions about how and why people connect in the first place. Facebook "connection" is defined -- semantically, at least -- by friendship. ("Facebook friends," "friending people," etc.) While it doesn't assume that every connection is an actual friend, in the narrow and maybe even old-fashioned sense of the word, Facebook's infrastructure does assume esteem among people who friend each other. (Compare that to LinkedIn, or even Twitter, which tend to take a much more pragmatic view of human interaction.)
Facebook, as a result, is structured as an aggressively upbeat place. And one potential cause of that is an overall atmosphere of social complicity. You can argue on Facebook, but it is not really encouraged. And the interactions Facebook fosters as it expands -- the status updates, the information sharing, the news consumption -- stem from that default-positive place. "Like," but not "Dislike." "Recommend," but not "Reject."
This is, of course, mostly to the good. The Internet has enough vitriol as it is; who wants to be on a site where everyone's disliking things? The question, though, is whether complicity leads to complacency -- particularly when it's made structural, as it so often is within Facebook's environment. Does this resolutely uncritical atmosphere harm people's ability to think critically?
While "it's all correlational right now," Rauch told me, the results she and Schanz got in their research could be due to the "atmosphere of agreement that Facebook provides" -- which, in turn, could lead to a "tendency for shallow processing" when it comes to the information being consumed on Facebook. Again, though: correlational. Heavy users of Facebook tend to use the site because of a desire for social inclusion. In that context, the study suggests, those users are primed to agree with fellow users rather than to criticize the information those users share. And not just in terms of their public interactions, but in terms of their private beliefs. This potent combination -- "a need to connect and an ethos of shallow processing" -- provides a warm, moist breeding ground for the spread of opinions, publicly and not-so-publicly. Racist ones among them.
That's significant, because Facebook wants to expand from social connection into informational connection. The News Feed as the "personalized newspaper"; the just-introduced Home as a mobile locus of that newspaper. "We're getting so much of our news from Facebook," Rauch notes. But what will be the consequences of that shift? That's what she and Schanz are, finally, asking. What will happen if information gets fully social -- according to Facebook's definition of "fully social"? What will take place when the Jack Browns of the world aren't just our friends, but our news sources?