Most Americans now know the feeling of typing something into a social media input box, thinking again, and deciding against posting whatever it was. But while it certainly seemed like a widespread phenomenon, no one had actually quantified the extent of this "self-censorship."
But now, new research based on a sample of 3.9 million Facebook users reveals precisely how widespread this activity is. Carnegie Mellon PhD student Sauvik Das and Facebook's Adam Kramer measured how many people typed more than five characters into Facebook content-input boxes but then did not post them. They term this "last-minute self-censorship." The research was posted to Das's website and will be presented at the Association for the Advancement of Artificial Intelligence's conference on Weblogs and Social Media in July.
The numbers are impressively large. Fully one-third of all Facebook posts were self-censored, according to the method Das and Kramer devised, though they warn they probably captured a substantial number of false positives. Seventy-one percent of all the users surveyed engaged in some self-censorship, either on new posts or in comments, and the median self-censorer did so multiple times.
Perhaps the most interesting part of the study was the demographic correlations with self-censorship. Men self-censored more often, particularly if they had large numbers of male friends. Interestingly, people with more diverse friend groups -- measured by age, political affiliation, and gender -- were less likely to self-censor.
While the researchers declined to speculate in this study about why people may or may not have self-censored, earlier research with a small group of users found five reasons people chose not to share what they'd written: they wanted to avoid sparking an argument or other discussion, they worried their post would offend or hurt someone, they felt their post was boring or repetitive, they decided the content undermined their desired self-presentation, or they were simply unable to post due to a technological or other constraint.
For Facebook users, the main takeaway here is probably: Feel free not to share. Facebook, on the other hand, has a more complex relationship to this research. Its interaction and business models depend on sharing, but it's not hard to imagine some circumstances in which it would be better not to share: racist content, say. So, Das and Kramer say that future research should address when the non-sharing is "adaptive" (which I think means good, in this context) and when, in their words, "users and their audience could fail to achieve potential social value from not sharing certain content, and the [social-network service] loses value from the lack of content generation."