Mark Zuckerberg signaled today that Facebook might take a different approach to so-called deepfakes, synthetic videos created with artificial intelligence, than it has taken toward other kinds of misinformation.
When it comes to misinformation or inaccuracy, Facebook has given its users a lot of room to make false statements without having their posts taken down. But deepfakes might be different.
“There is a question of whether deepfakes are actually just a completely different category of thing from normal false statements overall,” he told the Harvard legal scholar Cass Sunstein at the Aspen Ideas Festival, co-hosted by the Aspen Institute and The Atlantic, “and I think there is a very good case that they are.”
Deepfakes have inspired much consternation over their potential to destabilize public discourse. Built with new computing techniques, deepfakes can present quite realistic and convincing video and audio of people saying and doing things that they have not. A group of artists, for example, created one of Mark Zuckerberg himself trashing his company. Facebook decided to leave it up.
Zuckerberg today, however, said that Facebook is working through its “policy process” to figure out what to do about deepfakes. He offered that the problem, from his perspective, was that any definition had to be precisely scoped.
“Is it AI-manipulated media or manipulated media using AI that makes someone say something they didn’t say?” Zuckerberg asked. “I think that’s probably a pretty reasonable definition.”
It is also a noticeably narrow definition. For example, Facebook recently came under fire for its decision to leave up a Nancy Pelosi video that had been slowed down to make her appear drug-impaired or otherwise cognitively unsound. It didn’t use AI at all, but merely traditional (and quite basic) editing techniques.
While the Pelosi controversy was clearly in the background, Zuckerberg’s stated rationale for his definition was to prevent an explosion of takedowns that could result from too broad a definition.
“If [our deepfake definition] is any video that is cut in a way that someone thinks is misleading, well, I know a lot of people who have done TV interviews that have been cut in ways they didn’t like, that they thought changed the definition or meaning of what they were trying to say,” he said. “I think you want to make sure you are scoping this carefully enough that you’re not giving people the grounds or precedent to argue that things that they don’t like, or changed the meaning somewhat of what they said in an interview, get taken down.”
Which, if you consider the number of times that someone claims to have been misquoted or misrepresented by a journalist, is probably a legitimate fear.
Sunstein pushed for a broader definition of what kind of video Facebook should not allow and explicitly referenced the Pelosi video.
Zuckerberg described the problem with Facebook’s response as primarily one of “execution.” He said it took the company’s systems “more than a day” to flag the video as potentially misleading; once flagged, outside fact-checkers confirmed the assessment within an hour, but over that day the video achieved large-scale distribution. Zuckerberg’s preferred vision would have been for the video to stay up but be flagged immediately, thereby greatly limiting its distribution. “What we want to be doing is improving execution,” Zuckerberg said, “but I do not think we want to go so far toward saying a private company prevented you from saying something that it thinks is factually incorrect.”
That was in line with Zuckerberg’s other comments this afternoon, in which he repeatedly called for regulation to settle “fundamental trade-offs in values that I don’t think people want private companies to be making by themselves.”
Until that regulation comes, however, Zuckerberg said his company is working toward creating the best systems of governance it can. And he noted that Facebook now spends more on content review and safety than the company’s entire revenue at the time of its IPO. That suggests a spending rate of roughly a billion dollars a quarter.
And it was this spending, and the new (clearly still imperfect) infrastructure that it has created, that Zuckerberg used to defend his company from the renewed calls to break up Facebook. “On election integrity or content systems, we have an ability because we’re a successful company and large to be able to go build these systems that are unprecedented,” he said.
Not all problems seem to be solvable by scale, however. Earlier in the interview, when asked about foreign intervention in America’s elections, Zuckerberg reeled off a list of new Facebook policies, but then ultimately punted. “That’s above my pay grade,” he said.
This article is part of “The Speech Wars,” a project supported by the Charles Koch Foundation, the Reporters Committee for the Freedom of the Press, and the Fetzer Institute.