In December 2012, an Icelandic woman named Thorlaug Agustsdottir discovered a Facebook group called “Men are better than women.” One image she found there, Thorlaug wrote to us this summer in an email, “was of a young woman naked chained to pipes or an oven in what looked like a concrete basement, all bruised and bloody. She looked with a horrible broken look at whoever was taking the pic of her curled up naked.” Thorlaug wrote an outraged post about it on her own Facebook page.
Before long, a user at “Men are better than women” posted an image of Thorlaug’s face, altered to appear bloody and bruised. Under the image, someone commented, “Women are like grass, they need to be beaten/cut regularly.” Another wrote: “You just need to be raped.” Thorlaug reported the image and comments to Facebook and requested that the site remove them.
“We reviewed the photo you reported,” came Facebook’s auto reply, “but found it does not violate Facebook’s Community Standards on hate speech, which includes posts or photos that attack a person based on their race, ethnicity, national origin, religion, sex, gender, sexual orientation, disability, or medical condition.”
Instead, the Facebook screeners labeled the content “Controversial Humor.” Thorlaug saw nothing funny about it. She worried the threats were real.
Some 50 other users sent their own requests on her behalf. All received the same reply. Eventually, on New Year’s Eve, Thorlaug called the local press, and the story spread from there. Only then was the image removed.
In January 2013, Wired published a critical account of Facebook’s response to these complaints. A company spokesman contacted the publication immediately to explain that Facebook screeners had mishandled the case, conceding that Thorlaug’s photo “should have been taken down when it was reported to us.” According to the spokesman, the company tries to address complaints about images on a case-by-case basis within 72 hours, but with millions of reports to review every day, “it’s not easy to keep up with requests.” The spokesman, anonymous to Wired readers, added, “We apologize for the mistake.”
* * *
If, as the communications philosopher Marshall McLuhan famously said, television brought the brutality of war into people’s living rooms, the Internet today is bringing violence against women out of it. Once largely hidden from view, this brutality is now being exposed in unprecedented ways. In the words of Anne Collier, co-director of ConnectSafely.org and co-chair of the Obama administration’s Online Safety and Technology Working Group, “We are in the middle of a global free speech experiment.” On the one hand, these online images and words are bringing awareness to a longstanding problem. On the other hand, the amplification of these ideas over social media networks is validating and spreading pathology.
We, the authors, have experienced both sides of the experiment firsthand. In 2012, Soraya, who had been reporting on gender and women’s rights, noticed that more and more of her readers were contacting her to ask for media attention and help with online threats. Many sent graphic images, and some included detailed police reports that had gone nowhere. A few sent videos of rapes in progress. When Soraya wrote about these topics, she received threats online. Catherine, meanwhile, received warnings to back off while reporting on the cover-up of a sexual assault.
All of this raised a series of troubling questions: Who’s proliferating this violent content? Who’s controlling its dissemination? Should someone be? In theory, social media companies are neutral platforms where users generate content and report content as equals. But, as in the physical world, some users are more equal than others. In other words, social media is more symptom than disease: A 2013 report from the World Health Organization called violence against women “a global health problem of epidemic proportion,” from domestic abuse, stalking, and street harassment to sex trafficking, rape, and murder. This epidemic is thriving in the petri dish of social media.
While some of the aggression against women online occurs between people who know one another, and is unquestionably illegal, most of it happens between strangers. Earlier this year, Pacific Standard published a long story by Amanda Hess about an online stalker who set up a Twitter account specifically to send her death threats.
Across websites and social media platforms, everyday sexist comments exist along a spectrum that also includes illicit sexual surveillance, “creepshots,” extortion, doxxing, stalking, malicious impersonation, threats, and rape videos and photographs. The explosive growth of Internet-facilitated human trafficking also belongs on this spectrum, given that three-quarters of trafficked people are girls and women.
A report, “Misogyny on Twitter,” released by the research and policy organization Demos this June, found more than 6 million instances of the word “slut” or “whore” used in English on Twitter between December 26, 2013, and February 9, 2014. (The words “bitch” and “cunt” were not measured.) Researchers estimated that 20 percent of the tweets in the study were threatening. An example: “@XXX @XXX You stupid ugly fucking slut I’ll go to your flat and cut your fucking head off you inbred whore.”
A second Demos study showed that while male celebrities, female journalists, and male politicians face the highest likelihood of online hostility, women are significantly more likely to be targeted specifically because of their gender, and men are overwhelmingly those doing the harassing. For women of color, or members of the LGBT community, the harassment is amplified. “In my five years on Twitter, I’ve been called ‘nigger’ so many times that it barely registers as an insult anymore,” explains attorney and legal analyst Imani Gandy. “Let’s just say that my ‘nigger cunt’ cup runneth over.”
At this summer’s VidCon, an annual online-video convention held in Southern California, women vloggers shared an astonishing number of examples. The violent threats posted beneath YouTube videos, they observed, are pushing women off of this and other platforms in disproportionate numbers. When Anita Sarkeesian launched a Kickstarter to help fund a feminist video series called Tropes vs. Women, she became the focus of a massive and violently misogynistic cybermob. Among the many forms of harassment she endured was a game in which thousands of players “won” by virtually bludgeoning her face. In late August, after receiving a series of serious and violent online threats, she contacted the police and had to leave her home.
Danielle Keats Citron, law professor at the University of Maryland and author of the recently released book Hate Crimes in Cyberspace, explained, “Time and time again, these women have no idea often who it is attacking them. A cybermob jumps on board, and one can imagine that the only thing the attackers know about the victim is that she’s female.” Looking at 1,606 cases of “revenge porn,” where explicit photographs are distributed without consent, Citron found that 90 percent of targets were women. Another study she cited found that 70 percent of female gamers chose to play as male characters rather than contend with sexual harassment.
This type of harassment also fills the comment sections of popular websites. In August, employees of the largely female-staffed website Jezebel published an open letter to the site’s parent company, Gawker, detailing the professional, physical, and emotional costs of having to look every day at the pornographic GIFs maliciously populating the site’s comment sections. “It’s like playing whack-a-mole with a sociopathic Hydra,” they wrote, insisting that Gawker develop tools for blocking and tracking IP addresses. They added, “It’s impacting our ability to do our jobs.”
For some, the costs are higher. In 2010, 12-year-old Amanda Todd bared her chest while chatting online with a person who’d assured her that he was a boy, but who was in fact a grown man with a history of pedophilia. For the next two years, Amanda and her mother, Carol Todd, were unable to stop anonymous users from posting that image on sexually explicit pages. A Facebook page, labeled “Controversial Humor,” used Amanda’s name and image—and the names and images of other girls—without consent. In October 2012, Amanda committed suicide, posting a YouTube video that explained her harassment and her decision. In April 2014, Dutch officials announced that they had arrested a 35-year-old man suspected of using the Internet to extort dozens of girls, including Amanda, in Canada, the United Kingdom, and the United States. The suspect now faces charges of child pornography, extortion, criminal harassment, and Internet luring.
Almost immediately after Amanda shared her original image, altered versions appeared on pages, and videos proliferated. One of the pages was filled with pictures of naked pre-pubescent girls, encouraging them to drink bleach and die. While she appreciates the many online tributes honoring her daughter, Carol Todd is haunted by “suicide humor” and pornographic content now forever linked to her daughter’s image. There are web pages dedicated to what is now called “Todding.” One of them features a photograph of a young woman hanging.
Meanwhile, extortion of other victims continues. In an increasing number of countries, rapists are now filming their rapes on cell phones so they can blackmail victims out of reporting the crimes. In August, after a 16-year-old Indian girl was gang-raped, she explained, “I was afraid. While I was being raped, another man pointed a gun and recorded me with his cellphone camera. He said he will upload the film on the Net if I tell my family or the police.”
In Pakistan, the group Bytes for All—an organization that previously sued the government for censoring YouTube videos—released a study showing that social media and mobile tech are causing real harm to women in the country. Gul Bukhari, the report’s author, told Reuters, “These technologies are helping to increase violence against women, not just mirroring it.”
In June 2014, a 16-year-old girl named Jada was drugged and raped at a party in Texas. Partygoers posted a photo of her lying unconscious, one leg bent back. Soon, other Internet users had turned it into a meme, mocking her pose and using the hashtag #jadapose. Kasari Govender, executive director of the Vancouver-based West Coast Legal Education and Action Fund (LEAF), calls this kind of behavior “cybermisogyny.” “Cyberbullying,” she says, “has become this term that’s often thrown around with little understanding. We think it’s important to name the forces that are motivating this in order to figure out how to address it.”
In an unusually bold act, Jada responded by speaking publicly about her rape and the online abuse that followed. Supporters soon took to the Internet in her defense. “There’s no point in hiding,” she told a television reporter. “Everybody has already seen my face and my body, but that’s not what I am and who I am. I’m just angry.”
* * *