In the early 1990s, emerging digital technologies created a quandary. Online public forums, where users can post whatever they like, were among the earliest and most exciting applications of digital networks. But hosting such a forum was arguably akin to a newspaper publishing a Letters to the Editor page without bothering to read the letters, a prescription for legal catastrophe.
In two landmark cases, courts began to grapple with the issue. In 1991, a federal court ruled that the online service CompuServe was a mere distributor, rather than a publisher, of the material that it hosted, and so was not liable for its content. Its competitor Prodigy, however, was deemed to be liable in a New York state court ruling four years later, because Prodigy moderated user forums. By acting as an editor and not a mere conduit, the court reasoned, Prodigy made itself a publisher rather than a distributor.
In 1996, Congress passed the Communications Decency Act, a law meant to crack down on digital smut. From a decency perspective, the legal standard that had emerged from the CompuServe and Prodigy lawsuits seemed, well, perverse. Prodigy was liable because it had tried to do the right thing; CompuServe was immune because it had not. So Section 230 of the act stipulated that providers of internet forums would not be liable for user-posted speech, even if they selectively censored some material.
Much of the Communications Decency Act was quickly struck down by the Supreme Court, but Section 230 survived, and it quietly reshaped our world. Courts interpreted the law as giving internet services a so-called safe harbor from liability for almost anything involving user-generated material. The Electronic Frontier Foundation describes Section 230 as “one of the most valuable tools for protecting freedom of expression and innovation on the Internet.” The internet predated the law. Yet the legal scholar Jeff Kosseff describes the core of Section 230 as “the twenty-six words that created the internet,” because without it, the firms that dominate the internet as we have come to know it could not exist. Maybe that would be a good thing.
Services such as Facebook, Twitter, and YouTube are not mere distributors. They make choices that shape what we see. Some posts are circulated widely. Others are suppressed. We are surveilled, and ads are targeted into our feeds. Without Section 230 protections, these firms would be publishers, liable for all the obscenity, defamation, threats, and abuse that the worst of their users might post. They would face a bitter, perhaps existential, dilemma. They are advertising businesses, reliant on reader clicks. A moderation algorithm that erred on the side of a lawyer’s caution would catch too much in its net, leaving posters angrily muzzled and readers with nothing more provocative than cat pics to scroll through. An algorithm that erred the other way would open a floodgate of lawsuits.