The Blue Check Mark’s Evil Cousin

On Clubhouse, a black badge was meant to identify trolls. It’s become an emblem of the app’s dysfunctional moderation system.

To block someone on Facebook, Instagram, or Twitter is not, in the scheme of things, a big deal. You’ll no longer see them on the platform, they’ll no longer see you, and then you’ll both go on social networking, largely as you did before. Since your feed is made up of discrete posts personalized for you by an algorithm, blocking one person in particular can be a simple, unobtrusive action. It’s among the saving graces of a realm buffeted by bots and wracked with rancor.

But what if blocking didn’t, or couldn’t, work that way? The year-old social audio app Clubhouse is built on live group conversations: Everyone in a given “room”—a virtual space convened around a particular topic—hears the same person speak at the same time, and that shared context forms the basis of all interactions. A conversation in which certain people’s voices are silenced for certain other people would be incoherent to all. So when Clubhouse set out to develop its own blocking feature in the fall, in response to user outcries over rampant misogyny, anti-Semitism, and coronavirus misinformation, it had to come up with a new approach.

What it built was a blocking tool more potent, more consequential, and ultimately more contentious than that of any other social platform. It has likely played a role in Clubhouse’s growth, helping the app rocket to 10 million active users on iOS in just one year. And as Clubhouse finally launches on Android this week, it might also hold a key to the platform’s downfall.

When you block someone on Clubhouse, it doesn’t just affect communications between the two of you, as it would on Facebook or Twitter. Rather, it limits the way that person can communicate with others too. Once blocked, they can’t join or even see any room that you create, or in which you are speaking—which effectively blocks them for everyone else in that room. If you’re brought “onstage” from the audience to speak, anyone else in the audience whom you have blocked will be kept off the stage for as long as you’re up there. And if you’re a moderator of a room, you can block a speaker and boot them from the conversation in real time—even if they’re mid-sentence.
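
To make those mechanics concrete, here is a minimal sketch in Python of the rules as described above. Every name in it (`User`, `Room`, `can_see_or_join`, and so on) is hypothetical; Clubhouse has not published its implementation, so this is an illustration of the described behavior, not the company’s actual code.

```python
# Hypothetical model of the blocking rules Clubhouse describes.
# An illustrative sketch, not Clubhouse's actual code.
from dataclasses import dataclass, field

@dataclass
class User:
    name: str
    blocked: set[str] = field(default_factory=set)  # names this user has blocked

    def has_blocked(self, other: "User") -> bool:
        return other.name in self.blocked

@dataclass
class Room:
    creator: User
    speakers: list[User] = field(default_factory=list)  # users currently onstage

def can_see_or_join(room: Room, viewer: User) -> bool:
    """A blocked user can't see or join a room created by, or
    featuring onstage, anyone who has blocked them."""
    gatekeepers = [room.creator, *room.speakers]
    return not any(g.has_blocked(viewer) for g in gatekeepers)

def can_join_stage(room: Room, candidate: User) -> bool:
    """While a speaker is onstage, anyone they've blocked stays off it."""
    return not any(s.has_blocked(candidate) for s in room.speakers)

# Example: once alice blocks bob, bob can't even see alice's room.
alice, bob = User("alice"), User("bob")
alice.blocked.add("bob")
room = Room(creator=alice)
assert not can_see_or_join(room, bob)
```

Note what falls out of these rules: one person’s block silently changes what everyone else in the room can do, which is exactly the dynamic the next paragraph describes.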

Imagine a live panel discussion in which each member of the panel has the power to cut the mic of any other member, at any moment and for any reason, and also the power to have that person dragged from the lecture hall by security. That’s roughly how blocking works on Clubhouse. Blocking, in this context, is not just a personal decision but a social act, with implications for who can speak, at what times, and in what settings.

There’s even a visible emblem of this regime. When a user you don’t follow has been blocked by some unspecified number of people whom you do, that user’s profile will appear on your app with an ominous icon: a black shield with a white exclamation point. Clubhouse calls this feature a “shared block list.” Some users call the badge the “black check mark” (or even the “black mark of the damned”), as if it were an inversion of Twitter’s blue check mark, signaling notoriety instead of notability. While others can see it on a user’s profile, the user herself cannot—and most don’t even realize they’ve been marked until someone else breaks the news.
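
The badge logic, as users describe it, fits in a few lines. Here is a minimal sketch reusing the `User` class from the example above; the threshold is an assumption, since Clubhouse has not disclosed how many blocks from your network trigger the badge, and the function name is hypothetical.

```python
# Hypothetical sketch of the "shared block list" badge logic.
BADGE_THRESHOLD = 3  # placeholder; Clubhouse hasn't disclosed the real number

def shows_black_badge(viewer: User, profile: User,
                      viewer_follows: list[User]) -> bool:
    if profile is viewer:
        return False  # the badge is invisible to its own bearer
    if profile in viewer_follows:
        return False  # it appears only on profiles you don't follow
    # Count how many of the people the viewer follows have blocked this profile.
    blockers = sum(1 for u in viewer_follows if u.has_blocked(profile))
    return blockers >= BADGE_THRESHOLD
```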

When I asked Clubhouse for comment, and to clarify the workings of its systems, the company replied with some basic information about how the block tool and the black badge work, but declined to make any executives or employees available for an interview.

Hephzibah Cruz of Newton, Massachusetts, told me that her profile sprouted the black badge last month. She’d just blocked several users with whom she’d sparred on a different platform, and they retaliated by blocking her and encouraging others to do the same. “Now when I enter certain rooms, I’m viewed as an unresolved threat,” Cruz said—a frustrating outcome for a user who had thought that the block feature would help her avoid conflict rather than spark it.

By providing individuals with the power to control speech not only for themselves but for one another—and to affix warning labels to one another’s profiles—Clubhouse has essentially created a system of self-moderation, in which the thorny questions of online speech are devolved to its users. That approach scales easily, limiting the need for Clubhouse employees to make top-down content decisions or adjudicate disputes. It can also help people feel safe when discussing sensitive topics, such as sex, race, and mental health, since it gives them a simple way to protect their conversations from trolls.

As a result, Clubhouse has become a haven not only for the tech jet set that formed its earliest base, but for Black, Indigenous, and many other marginalized communities. Contrary to the perception that most conversations there involve venture capitalists bashing journalists and cryptocurrency geeks pumping dogecoin, Clubhouse is kaleidoscopically diverse. On a recent evening, I came across rooms devoted to veganism, mental health, UFOs, dating, and Inuit throat singing (which is truly delightful, if you’re not familiar with it). The start-up was rewarded in April with a funding round that valued it at $4 billion. Its success has sent Facebook and Twitter scrambling to build copycats. Twitter released its version, Spaces, widely last week.

Now, however, Clubhouse faces a crucial juncture. In the weeks before its Android launch, its iOS downloads cratered, a warning that the app’s cachet had faded. “The Clubhouse Party Is Over,” Vanity Fair’s Nick Bilton declared last month, writing off the social audio experiment as one best suited to pandemic times. But the positive reception of Twitter Spaces suggests that Clubhouse’s problems may be more specific than timing.

In fact, some have come to see its block-centric moderation system as part of the problem. About a dozen highly active Clubhouse users interviewed for this story, all of them women and most of them women of color, said the block feature has created its own array of opportunities for abuse, tactical silencing, and intimidation. The targets, in many cases, are the same vulnerable groups the tool was meant to protect.

“The problem I have with the blocking is that it can also be weaponized,” said Shireen Mitchell, a Black entrepreneur and activist who founded the nonprofit Stop Online Violence Against Women. For instance, she’s seen vaccine conspiracy theorists block doctors on Clubhouse to avoid debunkings (a phenomenon documented earlier this year by Motherboard’s Kaylin Dodson), and misogynists block feminists who challenge their views. “They love the power of having someone up onstage that they can then dismiss and send back down,” Mitchell said.

Sarah Szalavitz, an entrepreneur and self-described “accountability architect” who researches online platforms, was an early adopter of Clubhouse, and remembers advocating for a blocking feature before it had one. “I argued that the block function should work in a way that was social,” she said, so that you could see which users had been blocked by others in your network. “I would like to know why my friends blocked people … If you’re blocking someone for hate and misinfo, I could see that and decide whether to block him for that reason as well.”

Instead, she said, Clubhouse made a block feature that carries sweeping implications for people’s ability to use the app. “On Twitter, the impact of blocking is that I can’t see your messages,” Szalavitz said. “On Clubhouse, I can’t participate in any conversation you’re having, or know about it. It doesn’t give anybody the actual choice, except that one person who gets to be the dictator.”

The first time I came across the black mark of the damned, it was affixed to the Clubhouse profile of Taylor Lorenz, a New York Times tech-culture reporter (and former Atlantic writer) who has clashed with prominent venture capitalists. Lorenz told me that she gets a lot of questions about the badge, and that even her own family members see it until they follow her. It’s easy to imagine some value in having an invisible, back-end mechanism that limits the reach of people who have been widely blocked on Clubhouse, akin to Twitter’s so-called shadowban, she said, but Clubhouse’s implementation creates a vicious cycle. A signal that someone has been blocked a lot “only encourages other people to block them,” Lorenz said.

Clubhouse’s blocking feature seems to concentrate power in the hands of the users who already have it. When new users sign up for the app, they’re provided with a list of suggested users to follow, one that disproportionately includes powerful and opinionated men in the tech industry. Because those people have such huge followings—the founders of Andreessen Horowitz, for example, have about 7 million followers between them—getting blocked by a few of them can keep you out of Clubhouse’s most popular and newsworthy rooms. It can also mean that the millions of people who follow them at sign-up will see a black badge on your profile.

But you don’t have to cross swords with Clubhouse’s big-name influencers to find its block feature frustrating. Two regular Clubhouse users I spoke with requested anonymity because they said they’ve already faced targeted harassment on the app and fear that speaking out could lead to more. One, a Black woman in her 20s who’s studying medicine, said she has been barred from rooms discussing vaccination in Black communities, because one influential anti-vaxxer who frequents those rooms blocked her. She also found herself abruptly shut out of a weekly WandaVision watch-party club that had become her favorite experience on the app, evidently because one member had blocked her.

The other, a Jewish woman in her 20s who works in tech, said the anti-Semitism she has encountered on Clubhouse is more vicious than any she’s seen on other platforms. Yet the blocking and reporting tools are used by the anti-Semites as often as they are used against them, she said. (Clubhouse announced last month that it had shut down a number of rooms over complaints of anti-Semitism.) She sorts people with the black badge on their profile into two groups. One is “legitimately awful people,” she said, and the other is “people who stand up to injustice, and then other people block them because they just want to be racist and homophobic in peace.” To a moderation system that lacks a source of ground truth—unlike most enforcement actions on Facebook or Twitter, getting a black badge on Clubhouse or being blocked from a given room cannot be appealed to human moderators—those two groups are the same.

For all of the problems raised by Clubhouse’s moderation tools, it’s not obvious what the ideal fix for them would be. If you couldn’t keep people you had blocked from sharing a stage with you, the woman who works in tech pointed out, the implications might be just as ugly. For instance, rape victims could be stalked around the app by their rapists. How the company navigates these moderation problems matters, even if Clubhouse doesn’t become the next giant social platform. Its success has already inspired emulators that will have to balance similar tensions between grassroots and top-down moderation in a live-audio group format. Twitter Spaces, for its part, blocks people from a room or stage only if the host has them blocked; if another speaker has you blocked, that speaker will see a warning label by your name, but you’ll still be allowed in the room and onstage.
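
The contrast with Clubhouse’s rules is easy to state in code. A rough sketch of the Spaces policy as just described, again with hypothetical names and reusing the `User` type from the earlier examples:

```python
# Hypothetical sketch of the Twitter Spaces policy described above:
# only the host's blocks exclude you; a speaker's block merely
# attaches a warning label next to your name.

def spaces_can_join(host: User, speakers: list[User],
                    candidate: User) -> tuple[bool, bool]:
    """Return (allowed_in, warning_label) under the Spaces policy."""
    if host.has_blocked(candidate):
        return False, False  # only the host's block keeps you out entirely
    flagged = any(s.has_blocked(candidate) for s in speakers)
    return True, flagged     # a speaker's block labels you but doesn't bar you
```

The design difference is that Spaces concentrates the hard power to exclude in one role, the host, rather than distributing it to every speaker onstage.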

Perhaps more important, Spaces doesn’t seem as committed as Clubhouse to an optimistic vision of what happens when strangers with very different worldviews debate sensitive topics online in real time. Instead of allowing every user to see every possible “space” (the equivalent of Clubhouse rooms) by default, it shows them only those created by the people they follow. That eliminates a lot of the serendipity of Clubhouse; I doubt I’d have ever stumbled upon the Inuit-throat-singing room on Twitter’s audio platform. But the same moderation model might hold up better in an environment where the interactions are built on preexisting relationships between the speakers and the audience. The same trade-offs apply in social videochat apps: Those built explicitly to connect strangers, such as Omegle and Chatroulette, have given rise to grotesque moderation problems, while those that connect people who already know one another, such as Zoom and Houseparty, are both less thrilling and less problematic. (Discord, which includes both private and public “servers,” falls somewhere in between.)

Almost every source I spoke with for this story said that their enthusiasm for Clubhouse has waned, and that they know formerly active users who have left—but not because the pandemic is ending. It’s because an app that seemed to offer such promise for talking about difficult issues with people from all walks of life has turned out to be as toxic and exhausting as all the others. For live social audio (and video) to work, Clubhouse and its rivals will have to weigh the magic of connecting strangers in real time against the horrors of the same. The block button may not be able to do that by itself.