
Last month, the journalist Stephen Elliott filed a lawsuit against Moira Donegan. A year before, Donegan had created a shared Google spreadsheet that she titled “Shitty Media Men.” Then she invited women in her industry to add names to it; Elliott was among those named on the list. In “A Lawsuit Tests the Limits of Anonymous Speech,” I probed one question raised by the defamation case: Should Elliott be able to force Google to help him learn who had accused him of rape?

But the case also raises another, surprisingly complicated legal question: Is the spreadsheet’s creator, Donegan, liable for any defamatory accusations made after she circulated the document among media women?

For as long as Americans have been going online in large numbers, the courts have been trying to figure out who is liable if and when someone is defamed there. It is easy enough to map centuries of legal precedent onto cases in which a newspaper or magazine publishes an article on its website. But no case law maps perfectly onto a defendant who creates a shared Google document and chooses settings that fall somewhere between private and public, sharing it with people who can edit its contents and forward it. “I asked Floyd Abrams, a leading First Amendment expert, about this case,” Bari Weiss wrote, “and he said he had never seen one like it.”

Still, the courts will have to apply existing case law as best they can. And the best way to grasp what lies ahead is to explore the relevant precedents and history.

Cubby v. CompuServe

Digital life was very different in 1990. The biggest online service provider in the United States was called CompuServe. Its subscribers could chat with one another, play games, search a virtual library of information, and access more than 150 special-interest forums, including one on journalism. That forum contained many things, including a daily newsletter called Rumorville USA. It focused on the world of broadcast news. And it was written by a guy named Don Fitzpatrick, who was paid by an outside vendor and who had agreed in his contract that he was solely responsible for the information that was published.

One day, Fitzpatrick heard that another guy was developing a competing enterprise on the very same beat. Argh! He wrote on Rumorville USA that the start-up was a “scam,” that his new rival had been “bounced” from a previous job, and that he obtained content unethically.

The rival sued CompuServe for defamation.

Wait just a minute, CompuServe responded—even if this guy did defame his rival, we didn’t publish his words; we merely distributed them.

A federal district court agreed. “CompuServe has no more editorial control over such a publication than does a public library, book store, or newsstand, and it would be no more feasible for CompuServe to examine every publication it carries for potentially defamatory statements than it would be for any other distributor to do so,” it reasoned.

Noting that “while CompuServe may decline to carry a given publication altogether, in reality, once it does decide to carry a publication, it will have little or no editorial control over that publication’s contents,” the court foresaw how a conclusion other than the one it reached “would impose an undue burden on the free flow of information.” So long as CompuServe didn’t know or have any reason to know of the allegedly defamatory statements, the company was in the clear.

Stratton Oakmont, Inc. v. Prodigy

By October 23, 1994, Bill Clinton was in the White House, Boyz II Men were on the radio, and some 2 million subscribers could post to a popular online bulletin board called “Money Talk,” available through a successful online service provider called Prodigy.

That day, an anonymous user began posting on the bulletin board that an investment-banking firm and its president, both of whom he named, had “committed criminal and fraudulent acts” in connection with a stock offering. It was a “major criminal fraud,” he wrote, and it was being carried out by a “cult of brokers who either lie for a living or get fired.”

The target of those remarks sued Prodigy for defamation.

Wait just a minute, Prodigy responded. This was settled in the CompuServe case. Like them, we’re a distributor, not a publisher. Dismiss this!

The New York Supreme Court ultimately disagreed. Unlike CompuServe, which had made no effort to review content, Prodigy took some steps on behalf of its users. It had software that screened for profanity, an emergency-delete function, and language advising its bulletin-board users that it would remove “notes that harass other members or are deemed to be in bad taste or grossly repugnant to community standards, or are deemed harmful to maintaining a harmonious online community” when they were brought to its attention.

Of course, far too many posts went up for the service to review them all. But it did what it could. As if to show that no good deed goes unpunished, the court reasoned that “PRODIGY’s conscious choice, to gain the benefits of editorial control, has opened it up to a greater liability than CompuServe and other computer networks that make no such choice.”

Yikes, many thought: This case is going to cause everyone online to abdicate control over their platforms, for fear that efforts to remove at least the worst content they see, as best they can, will open them up to more liability.

The Communications Decency Act

Around that time, Congress was trying to pass a law to regulate obscenity and indecency online. It passed the Communications Decency Act, but the law’s anti-indecency provisions didn’t last long: The Supreme Court struck them down for violating the First Amendment. Only an amendment to the legislation survived.

Section 230 had been added by then-Representatives Ron Wyden and Chris Cox, who were alarmed by the precedent they feared the Prodigy case would set.

In their view, it was fine for a publisher that printed something unlawful, like an article with a defamatory claim, to be held liable. But what about comments left by a reader beneath a digital article? Or posts on a site like Craigslist? Had platforms been held strictly liable for anything illegal a user wrote, much of today’s internet would have been impossible. Imagine if Facebook, Reddit, or Twitter were legally liable for every bit of content their hundreds of millions of users posted.

To foreclose that outcome, Section 230 provided that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

That brings us back to the media-men list. Some observers are confident that its creator qualifies for Section 230 immunity, and that the defamation lawsuit against her will be thrown out on those grounds.

In Techdirt, for example, Cathy Gellis writes:

In this case, the progenitor of the Google doc was an intermediary enabling other people to express themselves through the online service—in this case, the Google doc—she provided. Section 230 allows that intermediaries can come in all sorts of shapes and sizes, because its immunity is provided broadly, to any provider of an “interactive computer service,” which is “any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server.” That’s what Donegan did with her Google doc: provide access to software to multiple users.

If anything is wrong with the content they contributed through this service, then they can be held responsible for it. But per Section 230, not Donegan.

Analysis of that sort may well spare Donegan from liability, assuming that she didn’t solicit, write, or substantively edit any defamatory accusations. But it’s premature to conclude that she’ll qualify for immunity absent more detailed information about exactly what role, if any, she played beyond creating the document, and about whether her case will be affected by any of the ways that the courts have narrowed Section 230.

The attorney Ken White of Brown White & Osborn, an expert on issues related to freedom of expression, told me that a Ninth Circuit case involving roommates.com was among those likely to be relevant. The majority opinion in that case begins, “We plumb the depths of the immunity provided by section 230 of the Communications Decency Act.”

Fair Housing Council of San Fernando Valley v. Roommates.com

Roommates.com helps people find roommates. The website was accused of violating laws that forbid housing discrimination based on protected characteristics. The Ninth Circuit’s ruling describes how subscribers had to create a profile before they could search listings or post rooms:

In addition to requesting basic information—such as name, location and email address—Roommate requires each subscriber to disclose his sex, sexual orientation and whether he would bring children to a household.

Each subscriber must also describe his preferences in roommates with respect to the same three criteria: sex, sexual orientation and whether they will bring children to the household. The site also encourages subscribers to provide “Additional Comments” describing themselves and their desired roommate in an open-ended essay.

Roommates was not liable for what its users wrote in the blank field labeled “Additional Comments.” The ruling affirmed its Section 230 immunity for that part of the site.

But the court did hold Roommates responsible for what it called its “own acts,” which is to say, “posting the questionnaire and requiring answers to it.” By requiring subscribers to provide information to use the service, “and by providing a limited set of pre-populated answers, Roommate becomes more than a passive transmitter of information provided by others; it becomes the developer, at least in part, of that information.”

The site was not liable if a user, unbidden by its design, independently expressed a housing preference that was illegal, but Section 230 “does not grant immunity for inducing third parties to express illegal preferences.”

Is the media-men spreadsheet more closely analogous to the “Additional Comments” field, where unlawful expression could not be blamed on the creator of the forum? Or is it more like the section of Roommates.com that limited the options of users in a way that played a part in inducing the unlawful content they subsequently contributed?

Later in Roommates, the court writes:

If an individual uses an ordinary search engine to query for a “white roommate,” the search engine has not contributed to any alleged unlawfulness in the individual’s conduct; providing neutral tools to carry out what may be unlawful or illicit searches does not amount to “development.”

In contrast, the opinion adds, Roommates.com “does not merely provide a framework that could be utilized for proper or improper purposes; rather, Roommates’ work in developing the discriminatory questions, discriminatory answers and discriminatory search mechanism is directly related to the alleged illegality of the site … It is being sued for the predictable consequences of creating a website designed to solicit and enforce housing preferences that are alleged to be illegal.”

Is the media-men spreadsheet more like a neutral tool that could be used by unknown contributors to levy lawful or unlawful allegations, suggesting that its creator is exempt from liability under Section 230? Or did guidelines provided by its creator relate directly to the alleged illegality? As yet, it is unclear what its creator told women, if anything, when inviting them to edit the sheet, other than the instructions on the sheet itself:

  • “DISCLAIMER: This document is only a collection of misconduct allegations and rumors. Take everything with a grain of salt. If you see a man you’re friends with, don’t freak out.”
  • “Men accused of physical sexual violence by multiple women are highlighted in red.”
  • “**You can edit anonymously by logging out of your gmail.** Please never name an accuser, and please never share this document with a man. Please don’t remove highlights or names.”

Does that disclaimer suggest nothing more than a creator trying to be responsible by noting the context and limits of a neutral forum? Does the word “rumors” instead constitute a nudge toward including unverified allegations that predictably led to allegedly defamatory statements? How about the admonition to refrain from removing any allegations? I don’t know what a court would or should conclude.

Batzel v. Smith

The Roommates court went on to clarify two of the Ninth Circuit’s previous rulings on the scope of Section 230 immunity. In Batzel v. Smith, the editor of an email newsletter received “a tip about artwork which the tipster falsely alleged to be stolen.” He added a header and included the tip in his next newsletter.

The artwork’s owner sued.

“Our opinion is entirely consistent with that part of Batzel which holds that an editor’s minor changes to the spelling, grammar and length of third-party content do not strip him of section 230 immunity,” the Roommates court wrote. “None of those changes contributed to the libelousness of the message, so they do not add up to ‘development’ as we interpret the term.”

In Donegan’s New York magazine essay on why she created the document, she writes:

Over the course of the evening, the spreadsheet expanded further: Many of the incidents reported there were physical, but there were also accounts of repeated sexual remarks, persistent inappropriate passes, unsolicited drunken messages. There was an understanding of the ways that these less-grave incidents can sometimes be harbingers of more aggressive actions to come, and how they can accrue into soured relationships and hostile environments. For clarity, I imposed a system that visibly distinguished violent accusations from others: Once a man had been accused of physical sexual assault by more than one woman, his name was highlighted in red.

No one confused a crude remark for a rape, and efforts were made to contextualize the incidents with notes — a spreadsheet allows for all of this information to be organized and included. But the premise was accepted that all of these behaviors were things that might make someone uncomfortable and that individuals should be able to choose for themselves what behavior they could tolerate and what they would rather avoid.

Donegan’s account leaves many questions unresolved. Imposed a system on whom? On something akin to editors? On all of the anonymous contributors? And who did the highlighting? Was the highlighting akin to edits that don’t contribute to the alleged libelousness of the message, since they add no new information, at least in an entry that already says “multiple rape allegations” right there for all to see? Or does the imposition of the system constitute something akin to “development,” stripping its creator of her immunity? For example, does the red highlighting imply that some editor verified that multiple women entered allegations against a given man, as opposed to one anonymous contributor trying to create that false impression?

Another wrinkle from the opinion:

If the tipster tendered the material for posting online, then the editor’s job was, essentially, to determine whether or not to prevent its posting—precisely the kind of activity for which Section 230 was meant to provide immunity. And any activity that can be boiled down to deciding whether to exclude material that third parties seek to post online is perforce immune under Section 230. But if the editor publishes material that he does not believe was tendered to him for posting online, then he is the one making the affirmative decision to publish, and so he contributes materially to its allegedly unlawful dissemination. He is thus deemed a developer and not entitled to CDA immunity.

That would seem to distinguish a hypothetical in which Donegan was emailed two allegations against Elliott and posted them on the sheet—a fact pattern that is not inconsistent with Section 230 immunity—from one in which she was emailed one allegation for posting, heard about another allegation of rape against the same person by someone who didn’t submit it, and then highlighted the entry to note multiple rape allegations. I highlight those hypothetical scenarios not because I have any reason to believe either happened, but to underscore that the legal outcome of the case may hinge on relatively small variations in the fact pattern that many would regard as morally indistinguishable from one another.

Carafano v. Metrosplash.com, Inc.

In the other case that the Ninth Circuit revisited, an anonymous person logged on to a dating site and created a fake profile impersonating the actress Christianne Carafano, making it appear that her sexual tastes were unconventional. The actress sued the dating site for publishing the profile.

The site claimed immunity under Section 230.

The court ruled that the libelous content “was created and developed entirely by the malevolent user, without prompting or help from the website operator;” that the site provided “neutral tools, which the anonymous dastard used to publish the libel,” but did “absolutely nothing to encourage the posting of defamatory content—indeed, the defamatory posting was contrary to the website’s express policies. The claim against the website was, in effect, that it failed to review each user-created profile to ensure that it wasn’t defamatory. That is precisely the kind of activity for which Congress intended to grant absolution.”

The opinion continued:

The salient fact in Carafano was that the website’s classifications of user characteristics did absolutely nothing to enhance the defamatory sting of the message, to encourage defamation or to make defamation easier: The site provided neutral tools specifically designed to match romantic partners depending on their voluntary inputs.

Do the highlights on the media-men list do anything to “enhance the defamatory sting of the message,” or do they merely reflect information that could be gleaned from the document in their absence? Did the document’s admonition to refrain from deleting any entries “make defamation easier” by removing a check on allegations that struck other users as suspect, or did it merely encourage the preservation of the neutral tool’s viability as a place where lawful allegations could be recorded and later redound to the benefit of others?

One final noteworthy passage in the Roommates opinion will delight those who believe that Donegan ought to enjoy immunity under Section 230:

Websites are complicated enterprises, and there will always be close cases where a clever lawyer could argue that something the website operator did encouraged the illegality. Such close cases, we believe, must be resolved in favor of immunity, lest we cut the heart out of Section 230 by forcing websites to face death by ten thousand duck-bites, fighting off claims that they promoted or encouraged—or at least tacitly assented to—the illegality of third parties. Where it is very clear that the website directly participates in developing the alleged illegality—as it is clear here with respect to Roommate’s questions, answers and the resulting profile pages—immunity will be lost.

But in cases of enhancement by implication or development by inference—such as with respect to the “Additional Comments” here—Section 230 must be interpreted to protect websites not merely from ultimate liability, but from having to fight costly and protracted legal battles.

Should the judge in Elliott v. Donegan et al. follow that precedent, a close call on the merits would result in the defendant’s being held immune under Section 230. At this point, given that many relevant facts and the scope of discovery in the case are unknown, no ultimate outcome would surprise me. If someone tries to tell you differently, ask them how they’re mapping this case onto the precedents in Roommates, Batzel, and Carafano.
