When a Snuff Film Becomes Unavoidable

On Wednesday morning, thousands of Twitter and Facebook users watched the lives of two Americans end, without ever having a choice in the matter.


On Wednesday morning, two journalists in Roanoke, Virginia, were murdered on live television by a gunman. The two victims were a 24-year-old reporter, Alison Parker, and a 27-year-old cameraman, Adam Ward. The gunman was a “disgruntled former employee” of the TV station, according to Virginia Governor Terry McAuliffe. The suspect later shot himself and died in police custody.

Two videos of the murders exist. The first was broadcast live, on TV, at the time of the killing. The second was taken by the gunman himself. He posted it to Twitter and to Facebook after the murder.

Both social media companies quickly suspended his accounts and removed the videos. For the 10 or 15 minutes before that, though, the videos circulated widely on both services as users shared them out of horror, confusion, or some other emotion.

In the past 12 months, both Twitter and Facebook have begun auto-playing videos when they appear in a user’s feed. If a video comes across your feed, or you accidentally open it in a tab or tap a link on your phone, it pops up and just starts playing. You do not have the option to figure out the video’s context and choose whether to press play: On both Twitter and Facebook, the footage just starts rolling. Oftentimes, that video is an ad, so you close it or ignore it and go on with your life.

But on Wednesday, the video that was auto-playing in everyone’s feed showed the murder of two people. It’s impossible to tell how many people saw the video (though Facebook’s version of the video was shared 500 times before it was taken down), but user reports suggest that thousands and thousands of people witnessed—without being warned ahead of time or knowing what they were getting themselves into—a brief, vivid, and unmistakable snuff film.

Forcing thousands of people to view two deaths without warning or preparation causes real harm. For almost all viewers, of course, watching the video does not approach the anguish felt by the victims’ friends, families, or coworkers. But that the auto-playing incident was not the worst horror in a morning full of them doesn’t lessen the need to talk about it, to figure out what happened, and to prevent it from happening again.

When I asked Twitter for comment, it referred me to its media policies, specifically this section: “Media that is marked as containing sensitive content will have a warning message that a viewer must click through before viewing the media.” There is a brief period of time, though, between when a video is uploaded and when it’s tagged as sensitive, and many people saw the video during that gap today. I also asked Facebook for comment but haven’t heard back yet.

Twitter and Facebook were not the only venues showing video of the murder on Wednesday morning. CNN was showing the TV station’s version of the video once an hour. But that kind of viewing is different, I think, than the auto-playing mayhem that descended on Twitter this morning, because there was a warning before it. Unless someone changed the channel directly into the brief footage, a viewer would know what they were about to see and could choose whether to watch it. I think, too, that the TV station’s version of the video was profoundly different than the murderer’s version, precisely because it was not filmed by the murderer.

There is some question as to whether media outlets should be showing these videos at all. In 2012, the sociologist Zeynep Tufekci wrote for The Atlantic about research suggesting that mass shootings, like teen suicides, are contagious: that by describing the specific method and setting of the killings, law enforcement and the media can prompt more of them. But while I don’t know that CNN is making the right choice to air the video, I do trust that they are thinking about it—that they are considering the airing of such a video as a meaningful act, one with possible benefits and consequences. I trust that they are thinking about it, in other words, editorially.

When Twitter debuted video auto-play in June of this year, meanwhile, it framed the change as a technical improvement. “Rich media creatives will now auto-play in timelines and across Twitter,” said a company blog post, describing it as a “consistent, seamless and friction-free” change which would lead to “a more streamlined consumption experience.”

The problem is that Twitter, or Facebook, or any other platform for mostly unfiltered reality, should not necessarily provide such “seamless” and “friction-free” access to that reality. If the premise of any social network is that people are better at managing what they want to see than old-school news editors are, then you actually have to let people choose what they want to see. And you have to understand, too, that when working on a website to which anyone can post, the technical choice to auto-play every video is a profoundly editorial one. Though individual users can deactivate auto-play (here are instructions), the feature supposes that most people will and should want to see every video that passes through their feeds.

I feel for the Twitter and Facebook employees who ordered, designed, and developed these features: Surely they didn’t anticipate that their workaday emails and meetings and server-architecture re-programmings would lead to thousands and thousands of people seeing a double murder.

But, as these features get reconsidered over the coming days, I hope they acknowledge that this incident was foreseeable. Since the feature debuted on Twitter in June, many people have pointed out that it auto-played all videos, including exceptionally violent ones. Those videos did not always reach the level of snuff films captured by a murderer, but they did show horrific violence against black people, often captured by a bystander or police-car dashboard camera.

These are not baseless or frivolous concerns. Columbia University’s Center for Journalism and Trauma has detailed recommendations for dealing with distressing or violent imagery.

“Traumatic imagery needs to be handled with care, as it can place the wellbeing of those who work with it at risk,” says the center’s guide:

From research, we know exposure to limited amounts of traumatic imagery is unlikely to cause more than passing distress in most cases; media workers are a highly resilient group. Nevertheless, the dangers of what psychologists call secondary or vicarious traumatization become significant in situations where the exposure is repeated—the slow drip effect.

The guide specifically recommends that film editors “avoid using the loop play function when trimming footage of violent attacks and point-of-death imagery.” A timeline full of the same awful, auto-playing moment constitutes its own kind of loop play, converting a place full of friends and followers into one of unavoidable distress and sadness. If social networks want users to stick around, they would do well to keep that in mind.