Over at the Wall Street Journal, Dylan Wittkower, editor of Facebook and Philosophy, shifted the focus to Facebook. Although Facebook users address a more intimate community than folks chiming in on the comments section of a story, Wittkower observes a similar problem when hot-button topics like gun control come up: "Facebook brings us into several new dynamics that intensify what seems to be already a predisposition for many: the inability to listen to someone say something wrong about something important and not say something about it."
What should be done to change comment sections if asking people to be nicer is a naïve pipe dream? Should we eliminate anonymous posting? Doing so would raise a host of well-documented free speech issues. Concern has even been raised over the merits of YouTube pushing its users to comment on videos under their real names. Elias Aboujaoude, author of Virtually You: The Dangerous Powers of the E-Personality, tells us that gains in civility could come at the expense of losses in "creativity, energy and innovation."
So, for the sake of argument, let's say eliminating anonymity goes too far. Is there a less restrictive way to enhance civility? Following the lead of Cass Sunstein and Richard Thaler, I think there might be; it involves nudging.
Nudging is a distinctive way to help people make good decisions. It differs from the typical ways of attempting to change behavior: rational persuasion (e.g., providing new information), coercion (e.g., using threats to ensure compliance), adjusting financial incentives (e.g., paying students to get good grades), and bans (e.g., prohibiting smoking in restaurants). And it has a limited domain of application: contexts where decisions need to be made, but we lack adequate time, information, or emotional wherewithal to know how to act in ways that further our best interests. In these cases, nudges work by subtly tweaking the contexts within which we make choices so that, on average, we will tend to make good ones.
Take ToneCheck, the emotional analogue to a spell-checking tool. It is a nudge for those of us who can't resist sending flaming emails. Applying connotative intelligence research to "automatically detect the tone" of your email, it offers the author a warning (that can prompt revision) if a draft exceeds the threshold for negative emotions (e.g., anger or sadness). The author has been nudged.
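To make the mechanism concrete, here is a minimal sketch in Python of how a ToneCheck-style threshold warning might work. The keyword list, threshold, and function names are illustrative assumptions on my part, not ToneCheck's actual method or interface; the real product's connotative-intelligence analysis is far more sophisticated than a word count.

```python
# A minimal sketch of a ToneCheck-style nudge. The keyword scoring below is a
# crude stand-in for real connotative-intelligence analysis; the word list,
# threshold, and names are illustrative assumptions, not ToneCheck's API.

NEGATIVE_WORDS = {"hate", "stupid", "idiot", "awful", "disgusting", "outrageous"}
NEGATIVITY_THRESHOLD = 0.05  # flag drafts where over 5% of words read as hostile


def negativity_score(text: str) -> float:
    """Fraction of words in the draft that match the negative-word list."""
    words = [w.strip(".,!?;:").lower() for w in text.split()]
    if not words:
        return 0.0
    return sum(w in NEGATIVE_WORDS for w in words) / len(words)


def tone_check(draft: str) -> str:
    """Warn the author, but leave the final decision with them -- that is the nudge."""
    if negativity_score(draft) > NEGATIVITY_THRESHOLD:
        return "Your note uses very strong language. Do you still want to send it?"
    return "No strong language detected. OK to send."


print(tone_check("This is a stupid, awful idea and I hate it."))
```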
What if magazines, newspapers, and blogs required readers to run their comments through a version of ToneCheck before posting? Bracketing the question of how effective the technology would be under current constraints, let's imagine that in red coloring -- the emotionally loaded shade of a teacher's corrections -- it gives readers accurate feedback and says things like, "Your note uses very strong language. Do you still want to send it?" Or perhaps a more potent version could be adopted, one that also uses scarlet lettering and says, "Because your note uses very strong language and might offend others, it will be placed in digital lockdown for 15 minutes. If you still want to send it then, click on this link."
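As a rough illustration only, the more potent variant might look something like the sketch below: a comment that trips the check is held for a 15-minute cooling-off period before the author can confirm it. The queue, the delay handling, and the crude strong-language test are hypothetical placeholders, not features of any existing commenting platform.

```python
# Sketch of the stronger "digital lockdown" nudge: flagged comments are held
# for a 15-minute cooling-off period before the author can confirm posting.
# Everything here (names, delay handling, the crude language test) is a
# hypothetical illustration, not an existing commenting platform's behavior.

import time
from dataclasses import dataclass, field

COOLDOWN_SECONDS = 15 * 60  # the 15-minute lockdown


def uses_strong_language(text: str) -> bool:
    """Crude placeholder for the tone analysis sketched earlier."""
    hostile = {"hate", "stupid", "idiot", "awful"}
    return any(w.strip(".,!?").lower() in hostile for w in text.split())


@dataclass
class HeldComment:
    author: str
    text: str
    held_at: float = field(default_factory=time.time)

    def cooldown_over(self) -> bool:
        return time.time() - self.held_at >= COOLDOWN_SECONDS


def submit_comment(author: str, text: str, held_queue: list) -> str:
    """Post mild comments immediately; lock strongly worded ones down for 15 minutes."""
    if uses_strong_language(text):
        held_queue.append(HeldComment(author, text))
        return ("Because your note uses very strong language and might offend others, "
                "it will be placed in digital lockdown for 15 minutes. "
                "If you still want to send it then, click on this link.")
    return "Comment posted."
```

Crucially, neither version blocks the comment outright; the author can still post after the pause, which is what keeps this a nudge rather than a ban.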