Facebook set a new land-speed record for situational irony this week, as it fired the human editors who curated its “Trending Topics” feature and replaced them with an algorithm on Friday, only to find the algorithm promoting completely fake news on Sunday.

Rarely in recent tech history has a downsizing decision come back to bite the company so publicly and so quickly.

Facebook must now decide how best to run the Trending feature, which has been a headache since its human editors faced accusations of anti-conservative bias this spring. The company has yet to respond to my multiple inquiries about this incident, but in a blog post on Friday announcing the change, it said: “A more algorithmically driven process allows us to scale Trending to cover more topics and make it available to more people globally over time.”

And late Monday evening, a Facebook vice president told CBS News that the promotion of the fake story was “a mistake for which we apologize, and it has been corrected.”

It’s difficult to know whom to blame for Facebook’s mistake. On its face, the company’s decision to switch from human to algorithmic editors seems like a shirking of responsibility. The new Trending algorithm appears to work by promoting the most-discussed news topics to a place of prominence, no matter their global or editorial importance. It also caters to the kinds of stories that users appear to want to read.

According to its Friday blog post, it does this with only enough human oversight to prevent inaccurate stories from trending. Jonathan Zittrain, a professor of law and computer science at Harvard University, likened Facebook’s decision to use impersonal algorithms to “confining things to the roulette wheel.”

“Even the casino isn’t supposed to know what number is going to win when it spins. And so, if there’s some issue, at least it isn’t intentional manipulation by Facebook,” he told me. Google claims the same innocence with its search results, he said.

But the murkiness of machine learning algorithms—it can sometimes become impossible to tell exactly why a program is making decisions—makes this kind of editorial abdication increasingly difficult to claim, he added. If an algorithm’s writers tune it to even slightly favor certain outcomes, then the program itself may drift into regrettable habits.

“The algorithm gets smart enough that, even if the casino isn’t looking to put a thumb on the scale, thumbs will appear,” he said. “This isn’t just Facebook’s problem—this is one of the profound problems of our time.”

In the case of Trending, Facebook said its human veracity-checkers messed up. But the company has an interest in keeping other people out of the picture: By firing anyone who made any other kind of editorial judgment, the company can assert that it remains only a technology company and not a media company. This is a rhetorical move with an ominous history: As BuzzFeed’s Charlie Warzel writes, the same claim allowed Twitter to ignore its culture of harassment, which now poses a major business threat to the company; and it permitted Uber to gain scale and skirt municipal oversight during its early years of explosive growth. (Facebook has hired journalists—and then thought better of it and dismissed them—before.)

But this prompts a second question: Even if algorithms are now running the show, is Facebook legally responsible for what happened over the weekend?

Let’s review the episode. For at least eight hours, Facebook promoted the topic “Megyn Kelly” because of a bogus but massively popular article claiming that Kelly, a Fox News anchor, had been fired from the network because she endorsed Hillary Clinton. It was totally wrong: She hadn’t, and she hasn’t.

Yet “the Trending review team accepted it thinking it was a real-world topic,” says Justin Osofsky, a Facebook vice president. It is unclear how this happened: The article was published by endingthefed.com, which is not a mainstream or particularly popular conservative outlet. Among the stories on its front page right now: “German Scientists Prove There is Life After Death.”

Thanks to Facebook’s help, the Kelly fabrication eventually racked up more than 200,000 likes. But here is a chicken-and-egg problem: As soon as a story starts “trending,” even if only several thousand people are talking about it, it immediately appears in front of millions of eyeballs. This brings it a lot of attention that it would otherwise never receive—especially now that Trending seems to surface specific URLs rather than generic topics. (On Facebook’s desktop site, Trending Topics appears in a right-hand sidebar; on its mobile app, the list populates after the user taps the search bar.)

In other words, even if the fake Megyn Kelly story was already popular, Facebook brought extra juice to it by trending it. Did it commit libel against her? Or did it liberate itself from liability when it let the roulette wheel handle everything?

“I think the direct answer to your whimsical question is that you’re not off the hook by just having made it random or by having it not literally come from your brain,” Zittrain told me. “It’s still entirely possible to defame someone. If The New York Times came out with a headline that was generated by a magic 8-ball, or by manatees pushing balls around, and they published it, they would still be defaming someone.”

So yes—tentatively. An algorithm could theoretically defame someone.

But Facebook isn’t on the hook for libel, because it has a get-out-of-jail-free card in the form of a small passage in a two-decade-old law: Section 230 of the Communications Decency Act of 1996.

“It basically says that you’re not responsible for words uttered by someone else online should you repeat them,” says Zittrain. Facebook, which only ever copied and pasted words from the endingthefed.com headline, is off the hook—at least in the United States.

Which is somewhat funny, because this was never quite the intention of the law. Before Congress passed the CDA in 1996, there was a general principle that the more aggressively someone edited something, the more responsible they were for it. A newspaper publisher might be responsible for libel published in a letter to the editor, but a bookstore wouldn’t be culpable for selling a defamatory book. You can see how this might backfire online: A blog or website that edited its comments section, for example, might be more on the hook for them than one that did not.

Section 230 removed that threat forever, and online free-speech groups like the Electronic Frontier Foundation still salute it accordingly. But Zittrain added that he didn’t know whether Facebook’s algorithmic excuse would cover someone publishing on paper.

Imagine a newspaper that picked which letters to the editor to run by randomly selecting them from a bag, he said. It could wind up unknowingly publishing libel—and the libel wouldn’t be any less illegal because of its grab-bag method. “I doubt a common law defamation court in 1995 would have much sympathy for them,” he said.