In the summer of 2015, Greg, Rasheed, and a few of their peers started fighting back against racism on Twitter. They found people who used the n-word and gently admonished them, reminding them that they were harassing and hurting real people.

Which is ironic, since neither Greg nor Rasheed was a real person. They were bots.

They were the creations of Kevin Munger, a politics student at New York University. By programming a variety of Twitter bots to respond to racist abuse against black users, he showed that a simple one-tweet rebuke can actually reduce online racism. “I like to read this as optimistic,” he says. “It is possible to change people’s behavior and not just for a short amount of time.”

But there’s a catch: The rebukes only worked if they came from white people (or bots with white profile pictures) with lots of followers.

“There’s a reason why higher-status members of these communities bear a larger share of the responsibility for speaking out against racist or bigoted speech,” says Betsy Levy Paluck, a psychologist at Princeton University. “This isn’t just a moral judgment but an empirical regularity that’s been coming out of many research programs: People with higher status are influencing norms, and with that influence comes responsibility. If anyone says, I’m not a role model, that’s a wish, not a fact.”

Munger’s study comes amid a growing appreciation of Twitter’s serious problems with harassment. Earlier this year, comedian Leslie Jones, a star of the recent Ghostbusters remake, was inundated with horrifying, racist tweets. She was the year’s most prominent victim of mass abuse, but far from its only one. As Charlie Warzel wrote on BuzzFeed, “Today, Twitter is a well-known hunting ground for women and people of color, who are targeted by neo-Nazis, racists, misogynists, and trolls, often just for showing up.” The problem has been acknowledged by Twitter’s CEO, and the company has today launched new tools designed to address it, including the ability to mute certain conversations and to filter out chosen words or phrases.

Munger tried to tackle the problem himself by creating several bots. He gave them all the same profile information and male cartoon avatar, but he varied their skin color and names to make them identifiably white or black. He also gave them followers—either fewer than 10, or between 500 and 550, which he “bought from a sketchy website.” And he wrote fake tweets from their accounts so that no one would suspect that they weren’t real.

Next, he compiled a list of white men on Twitter who tweeted the n-word at other users, regularly and offensively. He then targeted each of these people with one of the various bots, who admonished them for their slurs. And he carefully chose words that were not aggressive, but would emphasize common humanity: “Hey man, just remember that there are real people who are hurt when you harass them with that kind of language.”

In the following months, Munger found that people reduced their use of racist language if they were sanctioned by the white bot with lots of followers—and only that bot. This wasn’t just a drive-by effect, either: It dwindled over time, but the change lasted for at least a month.

“It tracks with previous research,” says filmmaker and futurist David Dylan Thomas. For example, in 2014, 14-year-old Trisha Prabhu created an anti-cyberbullying app called Rethink, which detects when people are writing hurtful comments and asks if they’re sure they want to post them; most pull back. As Thomas says, “If you can make someone aware of the fact that what they’re doing has an impact, to disrupt the process of going ‘I have this emotion, I’m just going to post it,’ in some circumstances, it can have an effect on the frequency of posting hateful commentary.”

The approach is clearly scalable. You could imagine an army of bots, crawling through Twitter and Facebook and speaking out against hate speech.

“It seems weird to advocate for the use of bots to corral behavior, but it doesn’t have to be bots,” says Damien Williams, a philosopher at Kennesaw State University who has studied futurism and AI. “Someone like William Gibson with hundreds and thousands of followers taking the time to say, ‘Hey this isn’t acceptable, you’re hurting real people,’ that would have a major impact on a lot of people who see him as an important powerful figure in their group.”

More surprisingly, Munger also found that this approach was only effective against anonymous users. When harassers used their own names and photos, they weren’t affected by the influential white bot, and they lashed out with more vitriol when confronted by the black bot with few followers. The result runs counter to the common intuition that anonymous users are more likely to be terrible to each other. But Munger has an explanation.

When people choose anonymity, they downplay their individual identities and their group identities take over. That might make them more hostile towards others outside their group, but also more responsive to social pressures from people within the group. But when people claim their hate with their own name, a rebuke might just affirm their prejudices.

Such influence matters more than ever. In the week since the U.S. election, the Southern Poverty Law Center has collected more than 315 incidents of hateful harassment and intimidation across the country—roughly what they’d normally expect to see in a half-year period. And while institutions are gearing up to fight at large scales, through legislation and advocacy, the burden of addressing bigotry is also an everyday one.

None of this is to absolve companies like Twitter and Facebook of their responsibilities in fighting harassment on their own platforms. But absent such measures, there is clearly a lot that users can do—especially influential members of majority groups. This is a time for allies. It’s a time, in Paluck’s words, “to be an activist in conversations and moments.”

Or, as a writer from the Crunk Feminist Collective simply put it, “Get your people.”