How to Stop the Bullies

The angst and ire of teenagers are finding new, sometimes dangerous expression online—precipitating threats, fights, and a scourge of harassment that parents and schools feel powerless to stop. The inside story of how experts at Facebook, computer scientists at MIT, and even members of the hacker collective Anonymous are hunting for solutions to an increasingly tricky problem.

With millions of reports a week, most processed in seconds—and with 2.5 billion pieces of content posted daily—it’s no wonder complaints like Carbonella’s fall through the cracks. A Facebook spokesperson said that the site has been working on solutions to handle the volume of reports, hiring “thousands of people” (though the company wouldn’t discuss the specific roles of these employees) and building tools to address misbehavior in other ways.

One idea is to improve the reporting process for users who spot content they don’t like. During my visit, I met with the engineer Arturo Bejar, who’d designed new flows, or sets of responses users get as they file a report. The idea behind this “social reporting” tool was to lay out a path for users to find help in the real world, encouraging them to reach out to people they know and trust—people who might understand the context of a negative post. “Our goal should be to help people solve the underlying problem in the offline world,” Bejar said. “Sure, we can take content down and warn the bully, but probably the most important thing is for the target to get the support they need.”

After my visit, Bejar started working with social scientists at Berkeley and Yale to further refine these response flows, giving kids new ways to assess and communicate their emotions. The researchers, who include Marc Brackett and Robin Stern of Yale, talked to focus groups of 13- and 14-year-olds and created scripted responses that first push kids to identify the type and intensity of the emotion they’re feeling, and then offer follow-up remedies depending on their answers. In January, during a presentation on the latest version of this tool, Stern explained that some of those follow-ups simply encourage reaching out to the person posting the objectionable material—who typically takes down the posts or photos if asked.
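Neither Facebook nor the Yale team has published the flow’s logic, but the description above implies a simple decision tree. Here is a minimal, hypothetical Python sketch of that pattern; every category name, threshold, and message is my own placeholder, not Facebook’s actual implementation.

```python
# Hypothetical sketch of the scripted "social reporting" flow described
# above: the reporter names an emotion and rates its intensity, and the
# script branches to a follow-up remedy. All categories, the threshold,
# and the messages are illustrative assumptions.

FOLLOW_UPS = {
    "embarrassed": "Consider messaging the person who posted it; "
                   "people usually take a post down when asked.",
    "afraid": "Reach out to someone you trust, like a parent, "
              "teacher, or counselor.",
    "angry": "Take a moment before responding; you can also ask "
             "the poster to remove it.",
}

def reporting_flow(emotion, intensity):
    """Return a scripted follow-up for a self-reported emotion (intensity 1-10)."""
    if intensity >= 8:
        # High-intensity reports might also be escalated to a human reviewer.
        return "This sounds serious. " + FOLLOW_UPS.get(
            emotion, "Please talk to an adult you trust.")
    return FOLLOW_UPS.get(emotion, "Tell us more about what's going on.")

print(reporting_flow("embarrassed", 5))
```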

Dave Willner told me that Facebook did not yet, however, have an algorithm that could determine at the outset whether a post was meant to harass and disturb—and could perhaps head it off. This is hard. As Willner pointed out, context is everything when it comes to bullying, and context is maddeningly tricky and subjective.

When I asked whether they’d rather be suspended from school or from Facebook, most middle- and high-school students picked school.

One man looking to create such a tool—one that catches troublesome material before it gets posted—is Henry Lieberman, a computer scientist whose background is in artificial intelligence. In November, I took a trip to Boston to meet him at his office in MIT’s Media Lab. Lieberman looked like an older version of the Facebook employees: he was wearing sneakers and a baseball cap over longish gray curls. A couple of years ago, a rash of news stories about bullying made him think back to his own misery in middle school, when he was a “fat kid with the nickname Hank the Tank.” (This is hard to imagine now, given Lieberman’s lean frame, but I took his word for it.) As a computer guy, he wondered whether cyberbullying would wreck social networking for teenagers in the way spam once threatened to kill e‑mail—through sheer overwhelming volume. He looked at the frustrating, sometimes fruitless process for logging complaints, and he could see why even tech-savvy adults like Carbonella would feel at a loss. He was also not impressed by the generic advice often doled out to young victims of cyberbullying. “ ‘Tell an adult. Don’t let it get you down’—it’s all too abstract and detached,” he told me. “How could you intervene in a way that’s more personal and specific, but on a large scale?”

To answer that question, Lieberman and his graduate students started analyzing thousands of YouTube comments on videos dealing with controversial topics, and about 1 million posts provided by the social-networking site Formspring that users or moderators had flagged for bullying. The MIT team’s first insight was that bullies aren’t particularly creative. Scrolling through the trove of insults, Lieberman and his students found that almost all of them fell under one (or more) of six categories: they were about appearance, intelligence, race, ethnicity, sexuality, or social acceptance and rejection. “People say there are an infinite number of ways to bully, but really, 95 percent of the posts were about those six topics,” Lieberman told me.


NowThisNews visits MIT’s Media Lab to interview Lieberman about his work.

Focusing accordingly, he and his graduate students built a “commonsense knowledge base” called BullySpace—essentially a repository of words and phrases that could be paired with an algorithm to comb through text and spot bullying situations. Yes, BullySpace can be used to recognize words like fat and slut (and all their text-speak misspellings), but also to determine when the use of common words varies from the norm in a way that suggests they’re meant to wound.

Lieberman gave me an example of the potential ambiguity BullySpace could pick up on: “You ate six hamburgers!” On its own, hamburger doesn’t flash cyberbullying—the word is neutral. “But the relationship between hamburger and six isn’t neutral,” Lieberman argued. BullySpace can parse that relationship. To an overweight kid, the message “You ate six hamburgers!” could easily be cruel. In other situations, it could be said with an admiring tone. BullySpace might be able to tell the difference based on context (perhaps by evaluating personal information that social-media users share) and could flag the comment for a human to look at.
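The article doesn’t show BullySpace’s internals, but the hamburger example suggests its shape: a lexicon of overtly loaded words, plus contextual assertions about otherwise neutral ones, with ambiguous hits routed to a person. The toy Python sketch below is my own simplification under those assumptions; the real system reportedly draws on a much richer commonsense knowledge base.

```python
# Toy sketch of the BullySpace idea: flag overt insults directly, and flag
# neutral words whose context (here, an out-of-the-ordinary quantity)
# suggests a taunt. Word lists and the quantity rule are illustrative.

import re

SLURS = {"fat", "slut"}  # the real system also covers text-speak misspellings

def flag_message(text):
    words = set(re.findall(r"[a-z]+", text.lower()))
    if words & SLURS:
        return "direct insult"
    # "You ate six hamburgers!": hamburger is neutral, but its relationship
    # to the quantity is not.
    if re.search(r"\b(six|seven|eight|nine|ten)\s+hamburgers?\b", text.lower()):
        return "possible appearance taunt; route to a human reviewer"
    return None

print(flag_message("You ate six hamburgers!"))
```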

BullySpace also relies on stereotypes. For example, to code for anti-gay taunts, Lieberman included in his knowledge base the fact that “Put on a wig and lipstick and be who you really are” is more likely to be an insult if directed at a boy. BullySpace understands that lipstick is more often used by girls; it also recognizes more than 200 other assertions based on stereotypes about gender and sexuality. Lieberman isn’t endorsing the stereotypes, of course: he’s harnessing them to make BullySpace smarter. Running data sets from the YouTube and Formspring posts through his algorithm, he found that BullySpace caught most of the insults flagged by human testers—about 80 percent. It missed the most indirect taunting, but from Lieberman’s point of view, that’s okay. At the moment, there’s nothing effective in place on the major social networks that screens for bullying before it occurs; a program that flags four out of five abusive posts would be a major advance.
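A stereotype assertion of this kind can be made concrete with a small stand-in rule: the identical sentence scores differently depending on the target. The marker list and threshold below are my own illustration, not one of BullySpace’s 200-plus assertions.

```python
# Illustrative stereotype assertion: feminine-coded items aimed at a boy
# raise the odds that a message is an anti-gay taunt.

FEMININE_MARKERS = {"wig", "lipstick", "dress", "makeup"}

def likely_antigay_taunt(message, target_gender):
    words = set(message.lower().split())
    return target_gender == "male" and len(words & FEMININE_MARKERS) >= 2

taunt = "Put on a wig and lipstick and be who you really are"
print(likely_antigay_taunt(taunt, "male"))    # True
print(likely_antigay_taunt(taunt, "female"))  # False
```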

Lieberman is most interested in catching the egregious instances of bullying and conflict that go destructively viral. So another of the tools he has created is a kind of air-traffic-control program for social-networking sites, with a dashboard that could show administrators where in the network an episode of bullying is turning into a pileup, with many users adding to a stream of comments—à la Let’s Start Drama. “Sites like Facebook and Formspring aren’t interested in every little incident, but they do care about the pileups,” Lieberman told me. “For example, the week before prom, every year, you can see a spike in bullying against LGBT kids. With our tool, you can analyze how that spreads—you can make an epidemiological map. And then the social-network site can target its limited resources. They can also trace the outbreak back to its source.” Lieberman’s dashboard could similarly track the escalation of an assault on one kid to the mounting threat of a gang war. That kind of data could be highly useful to schools and community groups as well as the sites themselves. (Lieberman is leery of seeing his program used in such a way that it would release the kids’ names beyond the social networks to real-world authorities, though plenty of teenagers have social-media profiles that are public or semipublic—meaning their behavior is as well.)
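The dashboard itself isn’t public, but the pileup detection it implies is straightforward to sketch: count comments per thread in a sliding window, flag threads that cross a threshold, and trace back to the earliest post as the presumed source. The window size, threshold, and event format below are all assumptions.

```python
# Minimal pileup detector in the spirit of Lieberman's dashboard.
from collections import defaultdict

WINDOW_HOURS = 24       # sliding window; illustrative
PILEUP_THRESHOLD = 50   # comments per window; illustrative

def find_pileups(events):
    """events: iterable of (thread_id, author, timestamp_in_hours) tuples."""
    threads = defaultdict(list)
    for thread_id, author, ts in events:
        threads[thread_id].append((ts, author))
    pileups = {}
    for thread_id, posts in threads.items():
        posts.sort()
        for i, (ts, _) in enumerate(posts):
            in_window = sum(1 for t, _ in posts[i:] if t - ts <= WINDOW_HOURS)
            if in_window >= PILEUP_THRESHOLD:
                # Trace the outbreak back to its earliest post.
                source_ts, source_author = posts[0]
                pileups[thread_id] = {"source": source_author,
                                      "started": source_ts}
                break
    return pileups

# A pre-prom spike: 60 comments on one thread within a day gets flagged.
events = [("prom_thread", "user%d" % i, i % 5) for i in range(60)]
print(find_pileups(events))
```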

I know some principals and guidance counselors who would pay for this kind of information. The question is what to do with it. Lieberman doesn’t believe in being heavy-handed. “With spam, okay, you write the program to just automatically delete it,” he said. “But with bullying, we’re talking about free speech. We don’t want to censor kids, or ban them from a site.”

More effective, Lieberman thinks, are what he calls “ladders of reflection” (a term he borrowed from the philosopher Donald Schön). Think about the kid who posted “Because he’s a fag! ROTFL [rolling on the floor laughing]!!!” What if, when he pushed the button to submit, a box popped up saying “Waiting 60 seconds to post,” next to another box that read “I don’t want to post” and offered a big X to click on? Or what if the message read “That sounds harsh! Are you sure you want to send that?” Or what if it simply reminded the poster that his comment was about to go to thousands of people?
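No site has shipped this, so any code is speculative. A command-line stand-in for the interaction pattern might look like the following; the harshness check is a placeholder where a BullySpace-style classifier would sit.

```python
# Speculative "ladder of reflection": warn, offer a way out, then pause.
import time

def submit_with_reflection(comment, looks_harsh, delay_seconds=60):
    """looks_harsh: a callable standing in for a real classifier."""
    if looks_harsh(comment):
        print("That sounds harsh! Are you sure you want to send that?")
        answer = input("Type X to cancel, or press Enter to post: ")
        if answer.strip().lower() == "x":
            return "discarded"
        print("Waiting %d seconds to post..." % delay_seconds)
        time.sleep(delay_seconds)  # the cooling-off pause itself
    return "posted"

# Example with a trivial stand-in check that flags one slur.
print(submit_with_reflection("Because he's a fag! ROTFL!!!",
                             looks_harsh=lambda c: "fag" in c.lower()))
```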

The superintendent at one school was grateful to Anonymous for intervening. “We would have never done anything if they hadn’t notified us,” he said.

Although Lieberman has had exploratory conversations about his idea with a few sites, none has yet deployed it. He has a separate project going with MTV, related to its Web and phone app called Over the Line?, which hosts user-submitted stories about questionable behavior, like sexting, and responses to those stories. Lieberman’s lab designed an algorithm that sorts the stories and then helps posters find others like them. The idea is that the kids posting will take comfort in having company, and in reading responses to other people’s similar struggles.
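MTV hasn’t described the matching algorithm, so the following is a generic stand-in: bag-of-words cosine similarity, one standard way to surface archived stories that resemble a new submission.

```python
# Generic story matcher: represent each story as word counts and return the
# archived story with the highest cosine similarity. A stand-in, not MTV's code.
import math
from collections import Counter

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def most_similar(new_story, archive):
    new_vec = Counter(new_story.lower().split())
    return max(archive, key=lambda s: cosine(new_vec, Counter(s.lower().split())))

archive = ["someone shared my photo without asking",
           "my ex keeps texting me at night"]
print(most_similar("a photo of me is being shared around school", archive))
```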

Lieberman would like to test how his algorithm could connect kids caught up in cyberbullying with guidance targeted to their particular situation. Instead of generic “tell an adult” advice, he’d like the victims of online pummeling to see alerts from social-networking sites designed like the keyword-specific ads Google sells on Gmail—except they would say things like “Wow! That sounds nasty! Click here for help.” Clicking would take the victims to a page that’s tailored to the problem they’re having—the more specific, the better. For example, a girl who is being taunted for posting a suggestive photo (or for refusing to) could read a synthesis of the research on sexual harassment, so she could better understand what it is, and learn about strategies for stopping it. Or a site could direct a kid who is being harassed about his sexuality to resources for starting a Gay-Straight Alliance at his school, since research suggests those groups act as a buffer against bullying and intimidation based on gender and sexuality. With the right support, a site could even use Lieberman’s program to offer kids the option of an IM chat with an adult. (Facebook already provides this kind of specific response when a suicidal post is reported. In those instances, the site sends an e-mail to the poster offering the chance to call the National Suicide Prevention Lifeline or chat online with one of its experts.)
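Mechanically, that last step is a lookup from taunt category to tailored help, akin to keyword-matched ads. A trivial sketch, with the categories drawn from the article and the help text as placeholders:

```python
# Category-to-resource routing; the messages are placeholders, not real links.
HELP_RESOURCES = {
    "sexuality": "Resources for starting a Gay-Straight Alliance at your school.",
    "appearance": "What research says about appearance-based harassment, "
                  "and strategies for stopping it.",
    "social rejection": "Wow! That sounds nasty! Click here for help.",
}

def help_alert(category):
    return HELP_RESOURCES.get(
        category, "Tell an adult you trust; click here for more options.")

print(help_alert("sexuality"))
```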

Emily Bazelon is a senior editor at Slate and a Truman Capote fellow at Yale Law School.
