Twitter's Famous Racist Problem

The social network risks losing the goodwill it built up during the Arab Spring.


Jack Dorsey, one of Twitter’s founders (and now its CEO), may have been seated, but he was taking a spiritual victory lap. It was April 2012, after the Arab Spring but before Egypt’s military coup, and Charlie Rose had just asked whether Facebook or Twitter had had more impact on the recent “revolutionary events.”

Dorsey smiled and demurred. Both had an effect, he said, generously. But then he gestured at how Twitter is inherently more open, more democratic—and, thus, more powerful.

“Twitter is naturally public, it’s a public conversation,” he told Rose. “If you’re in the middle of Egypt, you can pick up a $5 cellphone, you can send a text, and it doesn’t just go out to your social network that you’ve built and maintained, it goes out to the whole world.”

He continued. “Anyone can see it at any point, and anyone can listen in. And we think that’s quite powerful. We think that has a massive, public effect on the globe.”

In the opening years of the 2010s, the upbeat American tech industry took on a new global prominence. Pushed by an equally upbeat American president and a State Department preaching “internet freedom,” technology CEOs happily asserted a kind of nonchalant techno-determinism in geopolitics. Founders acknowledged that local factors triggered the Arab uprisings, but argued the revolts were helped along by the power and freedom of new technology.

Dorsey cast Twitter itself as a benign distributor of empathy and tolerance. “It creates more understanding, more empathy, for how people wake up and live their day and then go to sleep,” he said. “And if you have more empathy, that creates less contention, less conflict.”

* * *

Earlier this week, Twitter did not seem to be creating less contention. Leslie Jones, a star of the new Ghostbusters remake, found herself the target of hate and abuse on the service. Anonymous Twitter users barraged Jones, who is black, with vicious tweets insulting her and comparing her to apes and gorillas. Milo Yiannopoulos, the technology editor of the website Breitbart who tweets under the handle @nero, incited some of the abuse. He also shared images of tweets that looked like they had come from Jones’s account but were in fact made up.

Jones started by blocking the accounts abusing her, but—after they kept coming—she posted screenshots to publicize the harassment she was receiving, according to New York Magazine. She also repeatedly denied writing the fictitious tweets. The abuse continued for hours. By the end of the night, Jones announced she was leaving Twitter.

At first, Twitter declined to comment on the abuse, with a representative telling BuzzFeed that the company “[doesn’t] comment on individual accounts.” Late Monday night, a spokesperson clarified that “this type of abusive behavior is not permitted on Twitter, and we’ve taken action on many of the accounts reported to us by both Leslie and others.”

On Tuesday, the company permanently banned Yiannopoulos, who has led harassment campaigns in the past, and who has previously violated the company’s rules.

Mass campaigns of abuse and harassment have posed a problem for Twitter for years. When Robin Williams died in 2014, anonymous accounts sent tweets to his daughter’s account with images of dead bodies. She was forced to abandon the service. In December of that year, the company updated some of its user policies to try to handle the issue. It did not succeed, and while it has occasionally talked about the importance of tamping down mass-harassment campaigns, they have continued more or less unabated.

Jones’s harassment reopens these wounds. The racial politics of the company’s initial nonchalance posed especially pointed problems. As a company, Twitter is still run mostly by the kind of middle-aged white men who dominate the rest of the technology industry; as a community, Twitter is especially popular among younger people of color, especially African Americans. In its editorial section, “Moments,” Twitter frequently summarizes and profits off of inside jokes and hashtags created by its black users. For the company to so blithely ignore the highly public abuse of a prominent black female comedian suggested that it was happy to have those users, but would do little to support them.

That said, it was not ignored for long: Jack Dorsey reached out to Jones after she announced she was leaving the platform, asking her to direct message him. And Twitter replied to my questions about these issues with a lengthy statement:

People should be able to express diverse opinions and beliefs on Twitter. But no one deserves to be subjected to targeted abuse online, and our rules prohibit inciting or engaging in the targeted abuse or harassment of others. Over the past 48 hours in particular, we’ve seen an uptick in the number of accounts violating these policies and have taken enforcement actions against these accounts, ranging from warnings that also require the deletion of Tweets violating our policies to permanent suspension.

We know many people believe we have not done enough to curb this type of behavior on Twitter. We agree. We are continuing to invest heavily in improving our tools and enforcement systems to better allow us to identify and take faster action on abuse as it’s happening and prevent repeat offenders. We have been in the process of reviewing our hateful conduct policy to prohibit additional types of abusive behavior and allow more types of reporting, with the goal of reducing the burden on the person being targeted. We’ll provide more details on those changes in the coming weeks.

Permanently banning Yiannopoulos is a first step, but the company could go further if it wanted to. Because people tweeting anonymously will simply make a new account if theirs is blocked, Twitter could impose a strict one-phone-number-per-account rule. It could limit each IP address to creating only one or two new accounts per day. And it could consider expanding the block feature, so that a user cannot @-mention someone who has blocked them.

The problem that Twitter faces isn’t an easy one to fix. There is no program or algorithm that can reliably detect racism or malice. Even the best sentiment-analysis software falls down on the job, and simply banning slurs is too blunt an instrument. But this is a burden that comes with the same publicness that Dorsey bragged about four years ago.

That’s why Twitter’s best option may not be technical at all: The company would benefit from having a team of professionally trained moderators working in its San Francisco headquarters. Led by someone who understands Twitter and public relations, this team could surveil the site for evidence of mass abuse and harassment campaigns and intervene quickly and accordingly. Working under strong leadership, and using clear and public guidelines, this team could thread the needle between corporate censorship and rampant abuse.

Whatever path it chooses, though, Twitter would do well to lean toward action. In 2011 and 2012, Twitter could claim benign influence essentially by sitting back and doing nothing. Four years later, though Twitter and other social networks continue to help organize protests and spread information among less powerful groups, they also continue to empower those with darker motives. Donald Trump, the Republican candidate for president, has been able to spread racist and anti-Semitic messages on Twitter; and a loose collection of racists and neo-Nazis dubbing themselves the “alt-right” have chased people of color less famous than Jones off the service. Racists and anti-Semites have been able to accrue followers on Twitter, permitting a major public resurgence of hate and abuse.

Despite repeated promises, Twitter has failed to address the issue with the seriousness it requires. If it continues to fail, it will find itself just as implicated in this new political uprising as it was in the Arab Spring. Techno-determinism works both ways.