A Tool to Delete Beheading Videos Before They Even Appear Online
The creators of a child-porn detection system want to block terrorist propaganda from news feeds—but social media companies aren’t convinced it’s a good idea.

A week before opening fire on an Orlando nightclub, Omar Mateen was on Facebook. Perhaps he was looking for inspiration: According to the chairman of the Senate Homeland Security Committee, Mateen searched the site for a speech by Abu Bakr al-Baghdadi, the secretive ISIS leader. On the day of the shooting, he allegedly pledged allegiance to al-Baghdadi in a Facebook post.
Once dependent on leaflets and videotapes, terrorist groups now use social media as a chief recruiting tool. Facebook, Twitter, and other social networks have responded by accelerating campaigns to shut down accounts that spew pro-ISIS messages on their platforms; Twitter in particular has seen some success suspending ISIS-affiliated accounts by expanding the teams that watch for objectionable content. But it’s impossible to keep terrorist propaganda off social-networking platforms entirely: Each day, Twitter users tweet an average of 500 million times, and more than a billion people log into Facebook.
Instead of relying on humans to identify dangerous content, a computer-science professor at Dartmouth is proposing a system that automatically flags extremist photos, videos, and audio clips as they’re being posted online. But even before it was announced last week, the software met with reluctance from the very social-media companies that would use it.
The project’s leader is Hany Farid, the chair of Dartmouth’s computer-science department. Farid was the mind behind PhotoDNA, a Microsoft-backed system that detects child pornography as it’s posted online. That service relies on a stock of millions of pornographic images of children collected and maintained by the National Center for Missing and Exploited Children, or NCMEC. PhotoDNA creates a unique fingerprint of each image, called a hash, which can identify the photo even if it’s been manipulated or cropped. It’s used by a long list of major organizations and companies—from social networks and cloud-storage services to governments and law-enforcement agencies—to prevent images of juvenile sexual abuse from spreading. Microsoft has made the service free and easy for new online services to deploy.
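PhotoDNA’s exact algorithm is proprietary and has never been published, but it belongs to a family of techniques known as perceptual hashing. The sketch below, in Python, uses a generic “difference hash” to illustrate the core idea: unlike a cryptographic hash, the fingerprint changes only slightly when an image is resized, recompressed, or lightly edited, so near-duplicates can be caught by counting differing bits.

```python
# A minimal perceptual-hash sketch (dHash) -- NOT PhotoDNA itself,
# which is proprietary. Requires the Pillow imaging library.
from PIL import Image

def dhash(path: str) -> int:
    """Return a 64-bit perceptual fingerprint of an image file."""
    # Shrink to a 9x8 grayscale thumbnail; scaling throws away the
    # fine detail that cropping and recompression tend to alter.
    img = Image.open(path).convert("L").resize((9, 8), Image.LANCZOS)
    pixels = list(img.getdata())
    bits = 0
    for row in range(8):
        for col in range(8):
            # Each bit records whether brightness rises or falls
            # between two horizontally adjacent pixels.
            brighter = pixels[row * 9 + col] > pixels[row * 9 + col + 1]
            bits = (bits << 1) | int(brighter)
    return bits

def hamming(a: int, b: int) -> int:
    """Bits that differ between two fingerprints; small means similar."""
    return bin(a ^ b).count("1")
```

Two copies of the same photo, one of them cropped or re-encoded, typically land within a handful of bits of each other, while unrelated photos differ in roughly half their bits.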
When the service was introduced in 2008, Farid says, the bulk of the child pornography traded on the internet was in the form of images—but today, it is largely video. To catch up, Farid has spent the last eight months updating PhotoDNA to flag video and audio files as well as images.
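Farid hasn’t published the details of the video extension. One plausible approach, sketched here as an assumption rather than a description of his system, is to sample frames at a fixed interval and fingerprint each one, turning a clip into a sequence of per-frame hashes that can survive re-encoding and trimming.

```python
# A hypothetical video-fingerprinting sketch; Farid's actual method
# is unpublished. Requires OpenCV (pip install opencv-python).
import cv2

def video_hashes(path: str, every_n: int = 30) -> list[int]:
    """Fingerprint roughly one frame per second of a 30-fps video."""
    cap = cv2.VideoCapture(path)
    hashes, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:  # end of stream
            break
        if index % every_n == 0:
            # Same difference-hash idea as for still images.
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            small = cv2.resize(gray, (9, 8))
            bits = 0
            for row in range(8):
                for col in range(8):
                    bits = (bits << 1) | int(small[row, col] > small[row, col + 1])
            hashes.append(bits)
        index += 1
    cap.release()
    return hashes
```

Matching two clips then reduces to comparing their hash sequences, which would also let a filter catch an excerpt embedded in a longer video.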
The expanded fingerprinting technology can help broaden the fight to suppress child pornography online—but Farid has long hoped to use it to attack another genre of offensive content.
He teamed up with the Counter Extremism Project, a nonprofit led by a star-studded roster of former government officials, to propose a sister program to PhotoDNA that would help online platforms keep extremist content off the internet. Modeled on the child-porn detection system, the new program would establish a central clearinghouse to maintain a database of extremist content and distribute unique fingerprints of each photo, video, and audio file to the platforms that want to filter for it. The clearinghouse would be called the National Office for Reporting Extremism, or NOREX.
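The proposal doesn’t specify how platforms would consult the NOREX database, but the division of labor mirrors PhotoDNA: the clearinghouse distributes fingerprints, and each platform checks uploads against them before publication. A hypothetical sketch of that platform-side check:

```python
# Hypothetical platform-side filter; NOREX's actual interface and
# matching threshold are not public.
def hamming(a: int, b: int) -> int:
    """Bits that differ between two 64-bit fingerprints."""
    return bin(a ^ b).count("1")

MAX_DISTANCE = 10  # assumed tolerance, in bits, for a "near-duplicate"

def is_flagged(upload_hash: int, norex_hashes: set[int]) -> bool:
    """True if an upload is a near-duplicate of any known extremist file."""
    return any(hamming(upload_hash, known) <= MAX_DISTANCE
               for known in norex_hashes)
```

At production scale, a linear scan over millions of fingerprints would be too slow; real deduplication systems index hashes (with BK-trees or multi-index hashing, for example) so near-neighbor lookups stay fast.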
Researchers at the Counter Extremism Project have been painstakingly collecting extremist content for years, even enlisting crowdsourced help from other social-media users to point out offensive accounts. Mark Wallace, the organization’s CEO, proposes starting with the “worst of the worst”: files like ISIS’s savage beheading videos, or audio and video footage of Anwar al-Awlaki’s propaganda speeches. Wallace has high hopes that impeding the spread of such material will discourage terrorists from committing violent acts and hobble their propaganda machine.
“Imagine the circumstance if you are that ISIS propagandist, and you were thinking, ‘Well, is it really worthwhile torturing this poor soul and killing them on video? Because the moment I place that video online, it can’t go viral,’” he said. “I think that has the potential to be a quite consequential, perhaps game-changing effect on this recruitment and propagandizing.”
Wallace and Farid announced the program to a group of reporters last week, and indicated that it would be ready to deploy imminently. Farid said his software would be ready within months, and Wallace said that the project’s leaders had “very collegial discussions” with social-media companies about adopting the new software. “I don’t want to get too over my skis here, but I think there’s a lot of interest,” he said.
But to hear those companies tell it, the proposal is far from the brink of adoption. Although there have been months of conversations among the platforms that are most likely to use the software, lingering questions and a history of resentment toward Wallace and his organization have thrown up roadblocks.
The conversation began in earnest in late April, when Monika Bickert, Facebook’s head of global policy management, organized a conference call with social-media companies to discuss how to deal with terrorist material on their sites. According to company representatives familiar with the discussion, Bickert shared details about a handful of tools that would flag extremist content online. Although Bickert never mentioned CEP, Wallace, or Farid by name, one of the proposals she circulated was identical to the one CEP introduced Friday.
That conversation was polite and productive, but in private discussions, some participants raised concerns about the plan—and about working with Mark Wallace, who has long been a gadfly circling social-media companies and pushing them to police their newsfeeds and timelines for extremism.
The companies’ main concern is how to determine which content gets flagged as terrorist material. While NCMEC’s database of child-porn hashes is made up of images that are illegal as defined by law, it’s far harder to establish exactly what constitutes extremist content. Many countries define it very broadly, using the label of “terrorist” to silence dissent or opposition.
The job of deciding what counts as extremist and what doesn’t would fall to NOREX. The images, videos, and audio clips that NOREX determines are extremist would be flagged on participating social media sites, regardless of the context they were posted in.
“Unlike child pornography—which is itself illegal—identifying certain types of speech requires context and intent,” said Lee Rowland, a senior staff attorney at the American Civil Liberties Union. “Algorithms are not good at determining context and tone like support or opposition, sarcasm or parody.”
(Wallace says the proposed system would have some sort of appeals process, whereby a user notified that his or her post was flagged could submit a counterclaim for human review. Farid also proposed exempting some accounts—those operated by media companies, for example—that could post extremist content without repercussion for educational or news purposes.)
Farid says the pushback from social-media companies is reminiscent of the reaction to his original PhotoDNA proposal in 2008. At the time, he said, a coalition of technology companies would “dutifully meet and wring their hands” on a monthly basis, but never acted. It wasn’t until Microsoft and Facebook adopted PhotoDNA that other technology companies began slowly to come on board.
This time around, Farid has little patience for what he considers excessive foot-dragging. “We’ve seen this pattern before, and I find it a little inexcusable,” he said.
Wallace, too, lashed out at the industry for its reluctance to work with him and his organization. “I wish that certain social-media companies were as prickly about terrorists on their platforms as they are with a bipartisan group of former officials who ask the social media companies—politely—to refuse to host those same terrorists,” he said.
The companies do seem to share the ultimate goal of developing an algorithm to suppress terrorist content, and the group that arranged the April call is waiting for a written summary of the best options—likely including the CEP proposal—to be circulated. (Other proposals that may be up for consideration have yet to be announced.)
The idea has the backing of the White House, which has encouraged the participation of private companies in fighting extremism. “We welcome the launch of initiatives such as [this one] that enable companies to address terrorist activity on their platforms and better respond to the threat posed by terrorists' activities online,” said Lisa Monaco, President Obama’s top counterterrorism advisor. “The innovative private sector that created so many technologies our society enjoys today can also help create tools to limit terrorists from abusing these technologies in ways their creators never intended.”
It’s still too early to say if the eventual solution that social media companies adopt will be CEP’s. Technology representatives criticized last week’s announcement as premature, citing their own ongoing discussions—but after last week’s attack in Orlando, there may be a renewed push to act soon to make it harder for ISIS to inspire violence in the U.S.