Nudge, Nudge: Can Software Prod Us Into Being More Civil?

Maybe the answer for making online comments more thoughtful isn't in people, but in code.

Could software such as ToneCheck be applied not just to email but also to blog comments?

The closer we get to the presidential election, the more concern gets raised about how divided the country is and how acrimonious our discussions are over fundamental issues. Attack ads aren't the only problem. The comments sections on web pages and blogs are overflowing with bitterness. The mood expressed there shows such heightened signs of technological influence, it seems ripped from the pages of the Marshall McLuhan playbook: the medium of communication is influencing the messages people send and receive. The best solution, then, might be for magazines, newspapers, and blogs to address the root problem by hacking the source: re-designing the structure of the forum to encourage civility. Before considering whether we want to go there, let's quickly review why the medium matters.

At Scientific American, the hyperbolically titled "Why Is Everyone on the Internet So Angry?" asked why so many readers post hostile and rude comments on controversial Web stories. The answer? A "perfect storm of factors": anonymity lessens personal accountability; distance from our conversation partners makes us treat them as abstractions, not human beings; it's easier to be mean to someone when addressing them through writing rather than through speech; armchair commentary provides a false sense of accomplishment; and a lack of real-time flow in the conversation encourages monologues.

Over at the Wall Street Journal, Dylan Wittkower, editor of Facebook and Philosophy, shifted the focus to Facebook. Although Facebook users address a more intimate community than folks chiming in on the comments section of a story, Wittkower observes a similar problem arising when hotbed topics like gun control arise: "Facebook brings us into several new dynamics that intensify what seems to be already a predisposition for many: the inability to listen to someone say something wrong about something important and not say something about it."

What should be done to change comments sections if asking people to be nicer is a naïve pipe dream? Should we eliminate anonymous posting? Doing so would raise a host of well-documented free speech issues. Concern is even being raised over the merits of YouTube forcing its users to use their real names when commenting on videos. Elias Aboujaoude, author of Virtually You: The Dangerous Powers of the E-Personality, tells us that gains in civility could come at the expense of losses in "creativity, energy, and innovation."

So, for the sake of argument, let's say eliminating anonymity goes too far. Is there a less restrictive way to enhance civility? Following the lead of Cass Sunstein and Richard Thaler, I think there might be; it involves nudging.

Nudging is a distinctive way to help people make good decisions. It differs from the typical ways of attempting to change behavior: rational persuasion (e.g., providing new information), coercion (e.g., using threats to ensure compliance), adjusting financial incentives (e.g., paying students to get good grades), and bans (e.g., prohibiting smoking in restaurants). And it has a limited domain of application: contexts where decisions need to be made, but we lack adequate time, information, or emotional wherewithal to know how to act in ways that further our best interests. In these cases, nudges work by subtly tweaking the contexts within which we make choices so that, on average, we will tend to make good ones.

Take ToneCheck, the emotional analogue to a spell-checking tool. It is a nudge for those of us who can't resist sending flaming emails. Applying connotative intelligence research to "automatically detect the tone" of your email, it offers the author a warning (one that can prompt revision) if a draft exceeds the threshold for negative emotions (e.g., anger or sadness). The author has been nudged.
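The basic mechanism is easy to picture. Here is a minimal sketch of a threshold-based tone check in Python; it stands in for ToneCheck's proprietary connotative-intelligence models with a hypothetical word list and cutoff of my own invention, and every name in it (`NEGATIVE_WORDS`, `tone_score`, `needs_nudge`) is illustrative, not part of any real product.

```python
# Hypothetical word list and threshold; a real tone checker would use
# trained language models, not a handful of keywords.
NEGATIVE_WORDS = {"idiot", "stupid", "hate", "garbage", "pathetic", "worst"}
THRESHOLD = 0.05  # fraction of flagged words that trips the warning

def tone_score(text: str) -> float:
    """Return the fraction of words carrying strongly negative tone."""
    words = [w.strip(".,!?;:").lower() for w in text.split()]
    if not words:
        return 0.0
    flagged = sum(1 for w in words if w in NEGATIVE_WORDS)
    return flagged / len(words)

def needs_nudge(draft: str) -> bool:
    """True if the draft exceeds the negative-tone threshold."""
    return tone_score(draft) > THRESHOLD

if needs_nudge("You idiot, this is the worst take I have ever read."):
    print("Your note uses very strong language. Do you still want to send it?")
```

The point of the sketch is the shape of the intervention: the software scores a draft, compares the score to a threshold, and interrupts with a question rather than a block. The writer can always click through.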

What if magazines, newspapers, and blogs required readers to use a version of ToneCheck before entering comments? Bracketing the question of how effective the technology would be under current constraints, let's imagine that, in red coloring -- the emotionally charged color of a teacher's corrections -- it provides readers with accurate feedback and says things like, "Your note uses very strong language. Do you still want to send it?" Or perhaps a more potent version could be adopted, one that also uses scarlet lettering and says, "Because your note uses very strong language and might offend others, it will be placed in digital lockdown for 15 minutes. If you still want to send it then, click on this link."

Neither option censors speech. In both cases people can still choose to be nasty and petty. The first option simply ensures the writer is explicitly aware of tone. The second option simply buys the writer time to cool off. Both possibilities nudge because they tweak the choice context without introducing new information or financial incentives and without coercing someone or banning something.
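To make the distinction concrete, the two options can be sketched as two small functions: one that merely asks for confirmation, and one that stamps a strongly worded comment with a future release time. This is an illustrative sketch, not a real moderation API; it assumes some tone check exists (passed in as `needs_nudge`), and the function names and the `COOLDOWN_SECONDS` constant are hypothetical.

```python
import time

COOLDOWN_SECONDS = 15 * 60  # the hypothetical 15-minute "digital lockdown"

def submit_comment(text, needs_nudge, confirm):
    """Option 1: warn, then let the writer decide.

    confirm() shows the warning and returns the writer's choice.
    Returns the text if sent, or None if the writer backs off to revise."""
    if needs_nudge(text):
        if not confirm("Your note uses very strong language. "
                       "Do you still want to send it?"):
            return None  # writer chose to revise instead
    return text

def queue_comment(text, needs_nudge, now=time.time):
    """Option 2: strongly worded comments are held for a cooling-off period.

    Returns (text, release_time); the comment posts only after release_time."""
    release = now() + COOLDOWN_SECONDS if needs_nudge(text) else now()
    return text, release
```

In both cases the comment is ultimately the writer's call: nothing is deleted or rewritten, and the only cost imposed is a moment of reflection or a short wait.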

Critics could object that if the civility nudge becomes the default setting, there's no way to bypass it without hacking the system. But this constraint holds for other nudges, too. It is the basic problem of default settings. Without resorting to vandalism, you can't change the famous fly-etched urinal nudge that is designed to minimize spillage by giving men a target to aim at. Likewise, if a cafeteria adheres to a policy of arranging its display according to the nudge ideal of placing the fruit at eye level, law-abiding patrons are stuck coping with the configuration.

The issue, then, is that nudges are supposed to follow an ethic of "libertarian paternalism" and be easy to opt out of. If you don't want to use ToneCheck anymore, you can simply stop using the program. But if your boss forces you to use it to ensure compliance with company code, the program no longer functions as a nudge; it becomes a shove. So, while I don't think a 15-minute time-out imposes steep transaction costs, others might disagree and view a default civility nudge as the digital equivalent of a horrible boss. If this turns out to be a major concern, other options could be tried out -- perhaps making it a tool that individual web moderators can turn on or off, or perhaps making it an option for users to take advantage of, but only if they want.

I'm not sure if the civility nudge actually would help people politely discuss controversies. That's an empirical question. But it's a possibility worth considering and, hopefully, rationally weighing in on in the comments section here.

If you want to learn more about my take on nudging, see a previous Atlantic essay, "Why It's Okay to Let Apps Make You a Better Person." If you can put up with journal-imposed paywalls, check out articles I've written with collaborators Kyle Whyte, Arthur Caplan, and Jathan Sadowski: "Is There a Right Way to Nudge? The Practice and Ethics of Choice Architecture" (which clarifies why the proposal here might be a "fuzzy nudge"), "Nudge, nudge or shove, shove -- the right way for nudges to increase the supply of donated cadaver organs," and "Nudging Cannot Solve Complex Policy Problems." I'm also a fan of the blog iNudgeyou.