I’m going to confess an occasional habit of mine, which is petty, and which I would still enthusiastically recommend to anyone who frequently encounters trolls, Twitter eggs, or other unpleasant characters online.
Sometimes, instead of just ignoring a mean-spirited comment like I know I should, I type in the most cathartic response I can think of, take a screenshot, and then file that screenshot away in a little folder that I only revisit when I want to make my coworkers laugh.
I don’t actually send the response. I delete my silly comeback and move on with my life. For all the troll knows, I never saw the original message in the first place. The original message being something like the suggestion, in response to a piece I once wrote, that there should be a special holocaust just for women.
It’s bad out there, man!
We all know it by now. The internet, like the rest of the world, can be as gnarly as it is magical.
But there’s a sense lately that the lows have gotten lower, that the trolls who delight in chaos are newly invigorated and perhaps taking over all of the loveliest, most altruistic spaces on the web. There’s a real battle between good and evil going on. A new report by the Pew Research Center and Elon University’s Imagining the Internet Center suggests that technologists widely agree: The bad guys are winning.
Researchers surveyed more than 1,500 technologists and scholars about the forces shaping the way people interact with one another online. They asked: “In the next decade, will public discourse online become more or less shaped by bad actors, harassment, trolls, and an overall tone of griping, distrust, and disgust?”
The vast majority of those surveyed—81 percent of them—said they expect the tone of online discourse will either stay the same or get worse in the next decade.
Not only that, but some of the spaces that will inevitably crop up to protect people from trolls may contribute to a new kind of “Potemkin internet,” pretty façades that hide the true lack of civility across the web, says Susan Etlinger, a technology industry analyst at the Altimeter Group, a market research firm.
“Cyberattacks, doxing, and trolling will continue, while social platforms, security experts, ethicists, and others will wrangle over the best ways to balance security and privacy, freedom of speech, and user protections. A great deal of this will happen in public view,” Etlinger told Pew. “The more worrisome possibility is that privacy and safety advocates, in an effort to create a more safe and equal internet, will push bad actors into more-hidden channels such as Tor.”
Tor is software that enables people to browse and communicate online anonymously—so it’s used by people who want to cover their tracks from government surveillance, those who want to access the dark web, trolls, whistleblowers, and others.
“Of course, this is already happening, just out of sight of most of us,” Etlinger said, referring to the use of hidden channels online. “The worst outcome is that we end up with a kind of Potemkin internet in which everything looks reasonably bright and sunny, which hides a more troubling and less transparent reality.”
The uncomfortable truth is that humans like trolling. It’s easy for people to stay anonymous while they harass, pester, and bully other people online—and it’s hard for platforms to design systems to stop them. Hard for two reasons: One, because of the “ever-expanding scale of internet discourse and its accelerating complexity,” as Pew puts it. And, two, because technology companies seem to have little incentive to solve this problem for people.
“Very often, hate, anxiety, and anger drive participation with the platform,” said Frank Pasquale, a law professor at the University of Maryland, in the report. “Whatever behavior increases ad revenue will not only be permitted, but encouraged, excepting of course some egregious cases.”
News organizations, which once set the tone for civic discourse, have less cultural importance than they once did. The rise of formats like cable news, where so much programming involves people shouting at one another, and talk radio marks a clear departure from a once-higher standard of discourse in professional media. Few news organizations are stewards of civilized discourse in their own comment sections, which sends mixed messages to people about what's considered acceptable. And then, of course, social media platforms like Facebook and Twitter serve as the new public square.
“Facebook adjusts its algorithm to provide a kind of quality—relevance for individuals,” said Andrew Nachison, the founder of We Media, in his response to Pew. “But that’s really a ruse to optimize for quantity. The more we come back, the more money they make... So the shouting match goes on.”
The resounding message in the Pew report is this: There’s no way the problem in public discourse is going to solve itself. “Between troll attacks, chilling effects of government surveillance and censorship, etc., the internet is becoming narrower every day,” said Randy Bush, a research fellow at Internet Initiative Japan, in his response to Pew.
Many of those polled said that we’re now witnessing the emergence of “flame wars and strategic manipulation” that will only get worse. This goes beyond obnoxious comments, or Donald Trump’s tweets, or even targeted harassment. Instead, we’ve entered the realm of “weaponized narrative” as a 21st-century battle space, as the authors of a recent Defense One essay put it. And just like other battle spaces, humans will need to develop specialized technology for the fight ahead.
Researchers have already used technology to begin to understand what they’re up against. Earlier this month, a team of computer scientists from Stanford University and Cornell University wrote about how they used machine-learning algorithms to forecast whether a person was likely to start trolling. Using their algorithm to analyze a person’s mood and the context of the discussion they were in, the researchers got it right 80 percent of the time.
They learned that being in a bad mood makes a person more likely to troll, and that trolling is most frequent late at night (and least frequent in the morning). They also tracked the propensity for trolling behavior to spread. When the first comment in a thread is written by a troll—a nebulous term, but let's go with it—additional trolls are twice as likely to chime in as they are in a thread that a troll didn't start, the researchers found. On top of that, the more troll comments there are in a discussion, the more likely it is that participants will start trolling in other, unrelated threads.
“A single troll comment in a discussion—perhaps written by a person who woke up on the wrong side of the bed—can lead to worse moods among other participants, and even more troll comments elsewhere,” the Stanford and Cornell researchers wrote. “As this negative behavior continues to propagate, trolling can end up becoming the norm in communities if left unchecked.”
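To see the shape of such a forecast, here is a minimal sketch of how mood and thread context could feed a logistic score for whether the next comment will be trolling. The feature names, weights, and threshold are all hypothetical, chosen for illustration; the actual Stanford and Cornell model was trained on real discussion data and is far more sophisticated.

```python
import math

# Hypothetical hand-set weights for illustration only -- not the
# coefficients of the actual Stanford/Cornell model.
WEIGHTS = {
    "bad_mood": 1.2,             # commenter appears to be in a bad mood
    "late_night": 0.6,           # trolling peaks late at night
    "prior_troll_comments": 0.9, # per troll comment already in the thread
}
BIAS = -2.0  # baseline: most comments are not trolling

def troll_probability(bad_mood: int, late_night: int,
                      prior_troll_comments: int) -> float:
    """Logistic estimate that the next comment will be trolling.

    bad_mood and late_night are 0/1 flags; prior_troll_comments counts
    troll comments already present in the thread.
    """
    z = (BIAS
         + WEIGHTS["bad_mood"] * bad_mood
         + WEIGHTS["late_night"] * late_night
         + WEIGHTS["prior_troll_comments"] * prior_troll_comments)
    return 1.0 / (1.0 + math.exp(-z))

# A good-mood morning comment in a clean thread scores low; a bad-mood,
# late-night comment in a thread with two troll comments scores high.
calm = troll_probability(bad_mood=0, late_night=0, prior_troll_comments=0)
heated = troll_probability(bad_mood=1, late_night=1, prior_troll_comments=2)
```

In a real system the weights would be learned from labeled comments rather than set by hand, but the structure matches the findings above: mood, time of day, and the tone already established in a thread all push the estimate up or down.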
Using technology to understand when and why people troll is essential, and many people agree that the scale of the problem requires technological solutions as well. Stopping trolls isn't as simple as creating spaces that prevent anonymity, many of those surveyed told Pew, because doing so also enables "governments and dominant institutions to even more freely employ surveillance tools to monitor citizens, suppress free speech, and shape social debate," Pew wrote.
“One of the biggest challenges will be finding an appropriate balance between protecting anonymity and enforcing consequences for the abusive behavior that has been allowed to characterize online discussions for far too long,” Bailey Poland, the author of “Haters: Harassment, Abuse, and Violence Online,” told Pew. Pseudonymity may be one useful approach—so that someone’s offline identity is concealed, but their behavior in a certain forum over time can be analyzed in response to allegations of harassment. Machines can help, too: Chatbots, filters, and other algorithmic tools can complement human efforts. But they’ll also complicate things.
“When chatbots start running amok—targeting individuals with hate speech—how will we define ‘speech’?” said Amy Webb, the CEO of the Future Today Institute, in her response to Pew. “At the moment, our legal system isn’t planning for a future in which we must consider the free speech infringements of bots.”
Another challenge is that no matter what solutions people devise to fight trolls, the trolls will fight back. Even among those who are optimistic that the trolls can be beaten back, and that civic discourse will prevail online, there are myriad unknowns ahead.
“Online discourse is new, relative to the history of communication,” said Ryan Sweeney, the director of analytics at Ignite Social Media, in his response to the survey. “Technological evolution has surpassed the evolution of civil discourse. We’ll catch up eventually. I hope. We are in a defining time.”