“[Tech companies] don’t have the infrastructure they need to scale to this size,” he said. He thinks these companies are simply underinvesting in moderation resources. In 2009, Farid worked with Microsoft to develop PhotoDNA, a hashing technique that detects and blocks the upload of photos and videos that depict child sexual abuse. “This type of hashing technology has been around for decades,” Farid said. “So the fact that they have not refined it and improved it to the point that it should be nearly perfect is, I think, inexcusable.” (In its blog post, Facebook emphasized its efforts to “identify the most effective policy and technical steps” to moderate this type of video, including its matching technology and its work combating hate speech.)
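The core idea behind this kind of hash-based blocking is simple: compute a fingerprint of every upload and compare it against a list of fingerprints of known banned material. The sketch below illustrates the matching step using an ordinary cryptographic hash; the blocklist entries and function names are hypothetical, and real systems such as PhotoDNA use perceptual hashes, which tolerate resizing and re-encoding in ways a cryptographic hash does not.

```python
import hashlib

# Hypothetical blocklist of fingerprints of known banned files.
# In production this would hold perceptual hashes, not SHA-256 digests.
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"example banned content").hexdigest(),
}

def is_blocked(upload_bytes: bytes) -> bool:
    """Return True if the upload's fingerprint matches the blocklist."""
    digest = hashlib.sha256(upload_bytes).hexdigest()
    return digest in KNOWN_BAD_HASHES

print(is_blocked(b"example banned content"))   # matches the blocklist
print(is_blocked(b"harmless vacation photo"))  # does not match
```

The limitation Farid alludes to is visible here: an exact-match hash is defeated by changing a single byte, which is why refining the matching so it survives cropping, compression, and re-uploads is the hard part.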
For years, Silicon Valley has promised artificial intelligence as a long-term solution for both blocking extremist content and improving working conditions for moderators, who have had to view and block video of the shooting manually. AI, Rosen said at Facebook’s developer conference last year, could one day detect and hash banned material preemptively without human workers having to stare at hours of gruesome material. In congressional testimony last year, Mark Zuckerberg said he expects AI hashing to take over content moderation soon.
But the ability to train algorithms to accurately recognize extreme violence without anyone having to flag it in the first place is still very far off—even the most advanced AI can’t distinguish between a real shooting and a movie scene, Farid told me.
Ultimately, the use case for purely AI-driven content moderation is fairly narrow, said Daphne Keller, the director of intermediary liability at the Stanford Center for Internet and Society, because nuanced decisions are too complex to outsource to machines.
“If context does not matter at all, you can give it to a machine,” she told me. “But, if context does matter, which is the case for most things that are about newsworthy events, nobody has a piece of software that can replace humans.”
A preemptive AI filter would be best suited for, say, beheadings or child porn, where there’s never any legitimate use case and thus no need for human input. But the type of violence seen in New Zealand is different. Banning all footage would interrupt journalistic coverage and legitimate scholarship. In 2016, Facebook apologized after removing a Facebook Live video of the shooting death of Philando Castile. Though graphic, the video showed the bloody aftermath of a police shooting, and activists argued it was powerful evidence of the need for police reform and should therefore remain on the site.
“Ultimately, you need human judgment,” Keller said. “Or else, you need to make a different kind of decision and say, ‘Getting rid of this is so important, and sparing humans from trauma is so important, that we’re going to accept the error of having legal and important speech disappear.’”