This past week, with some fanfare, Facebook announced its own version of the Supreme Court: a 40-member board that will make final decisions about user posts that Facebook has taken down. The announcement came after extended deliberations that have been described as Facebook’s “constitutional convention.”

We don’t usually use sweeping terms such as Supreme Court and constitution to describe the operation of private companies, but here they seem appropriate. Internet platforms such as YouTube and Facebook have been called the modern public square. That description understates the platforms’ importance for the many people who use them in place of newspapers, TV stations, the postal service, and even money. People whose posts are removed from major platforms say they are being excluded from the most important communication channels of our age. We should care deeply about the rules these companies apply to our speech and behavior—whether PayPal should process donations to WikiLeaks, for example, or whether the security provider Cloudflare should protect neo-Nazi sites or 8chan, or whether Facebook should have taken down the famous Vietnam War photo of a naked girl fleeing her village.

But private platforms aren’t really the public square, and internet companies aren’t governments. That’s exactly why they are free to do what so many people seem to want: set aside the First Amendment’s speech rules in favor of new, more restrictive ones. Messages we might once have heard from a soapbox in the park—including very troubling ones about dangers of vaccines, conspiracy theories, or racist agendas—can be banished from social-media platforms.

The prevailing framework for free expression is getting a do-over. The existence of private Supreme Courts and constitutional conventions makes it possible, even easy, to imagine that this do-over will be governed by the same constraints as real-world governments: constitutional rules to protect individual rights, and democratic processes to set the laws we live under. Other rules that are being retrofitted for private internet platforms also sound like the mechanisms that keep real-world governments accountable to the public. Advocacy groups have demanded appeals or other due-process-like rights for people accused of violating those rules. Some advocates—concerned that “behind-the-scenes lobbying,” to borrow The Wall Street Journal’s phrase, is shaping platforms’ speech rules—are calling for the same kind of transparency we would expect from real legislators about their interactions with lobbyists.

Those are all good developments—up to a point. But we should not fool ourselves that mimicking a few government systems familiar from grade-school civics class will make internet platforms adequate substitutes for real governments, subject to real laws and real rights-based constraints on their power. Compared with democratic governments, platforms are far more capable of restricting our speech. And they are far less accountable than elected officials for their choices. We should pay close attention to those differences before urging platforms to take on greater roles as arbiters of speech and information.

Major platforms can restrict our speech more effectively than any government in history. They can’t jail dissenters, but they can silence them. Increasingly, platforms such as Facebook rely on software that can monitor everything we write, route it for review, or just automatically delete forbidden words or images. Platforms can take down lawful but hateful, harassing, misleading, or offensive posts, and do so very swiftly—even with appeals, they resolve in minutes disputes that might take courts months or years.

But the same lack of democratic or constitutional accountability that makes platforms so effective as content regulators makes them very hard to constrain in other ways. The platform versions of “due process” or legislative “sunshine laws” will be pale reflections of the originals. Twitter may seek public input on its speech rules, but users will never influence those rules the way voters influence actual legislation—by electing representatives to voice our interests, and ousting them if they fail to do so. Members of the Facebook Supreme Court are not charged with upholding the rights users have under the Constitution, human-rights law, or any other legal instrument. Their job, by and large, will be to interpret the platform’s existing policies. Our rights against platforms are, at best, Bill of Rights Lite.

Many platforms try hard to do the right thing in their speech policies. Google, where I worked for many years, poured substantial resources into defining responsible rules, and wrestled with a lot of very hard judgment calls. Platforms will also respond to outside pressure, including the Senate hearing last week over content posted by violent extremists. Every public outcry over the spread of fake news or hate speech nudges internet companies to assume more of the powers that Americans and citizens of other democracies deny their governments—to prohibit legal speech, keep users under pervasive surveillance, and use algorithmic content filters to make certain things literally unsayable. The more platforms assume these powers, the more we should worry about the fact that we won’t actually control how they will use them.

We should be realistic about who is likely to call the shots as private, for-profit platforms assume greater roles in restricting online speech. Most obviously, governments that control access to major markets are well positioned to influence platforms’ decisions worldwide. It’s no coincidence that YouTube, Facebook, Twitter, and Microsoft, which earn substantial portions of their revenues in Europe, apply European hate-speech standards globally—even in countries where that speech is legal. Less democratic regimes in the next round of emerging markets will no doubt want similar authority. Platforms’ rules are shaped by advertisers too. YouTube overhauled its content policies when advertisers threatened to defect in 2017—and small filmmakers lost ad revenue as a result. Business partners can influence platforms’ decisions about content, and so can business rivals. A platform that fears bad press or legislative setbacks has reason to appease whoever can make those things happen.

If platforms aren’t accountable to us, why are we encouraging them to assume so much practical control over our speech? One line of thinking seems to be that it’s too late to do anything else—that platforms already have rules, so they might as well have better ones. That reasoning is almost always a one-way ratchet. It justifies telling platforms to do more instead of telling them to roll back changes they’ve already made. The evolution of the filtering algorithms many platforms use to automatically block images and videos online provides a good example. Platforms originally built tools such as PhotoDNA to detect the worst of the worst content: child pornography. Then, in 2016, they retooled the algorithms to identify content that platforms—applying their own terms of service rather than the law—designate as “terrorist.” Now the European Union’s highest court is considering whether Facebook must use filters to stop users from calling an Austrian politician a “lousy traitor” and a “fascist.” We can only guess what the next expansion of filters’ roles will be. If platforms’ existing policing capabilities are always a reason to demand more, it’s hard to identify any stopping point.

Treating platforms like governments—encouraging their control, but constraining them with flimsy versions of democratic input or due process—is not part of democracies’ usual playbook when companies gain too much power, or when their businesses cause harm. Market forces and competition are supposed to keep private commercial power in check. When those fail to do so, the next line of defense is to enforce competition law, not to establish new quasi-governmental rules for companies. If businesses are profiting from messes that hurt everyone else—as polluters or cigarette companies have done, and as many people allege platforms are doing now—governments can tax them to fund corrective measures such as waste cleanup or education campaigns. If a company is uniquely well positioned to solve a problem, as platforms may be in the case of viral content, Congress can empower courts to make them do it, as long as the content is actually illegal.

But we aren’t following the usual playbook, in which government sets the rules and companies follow them. We can’t—not as long as we want platforms to block speech that the U.S. government, following the First Amendment, has no power to restrict. Bypassing the First Amendment requires putting power in private hands. And that in turn means giving up on real democratic input, due process, Supreme Court review, or any of the other tools we use to constrain actual governments.

Prominent in the current “techlash” are two distinct criticisms: that tech companies have grown too strong, and that they aren’t doing enough to make users behave. In essence, critics are telling platforms to take on more power to police their users. If the pundits of 2009 were blithely confident that platforms would use their influence only for good, the pundits of 2019 seem to think this will still happen as long as platforms behave responsibly.

But the way that democratic countries issue instructions for responsible private behavior is by passing laws. Those laws are constrained by the Constitution, including the First Amendment, for a reason. Lawyers can argue—and some respected ones do—for more restrictive interpretations of the First Amendment, in which courts and lawmakers can prohibit online the sort of speech that would be legal in a park or a bar. But that kind of legal rule change would be an incremental and careful process. Instead, private companies are reshaping our speech norms in fits and starts. This big do-over is what happens when we put platforms in charge. In our rush to deputize companies as enforcers of new rules for the public square, we are forfeiting constitutional protections and major aspects of self-governance.

This article is part of “The Speech Wars,” a project supported by the Charles Koch Foundation, the Reporters Committee for the Freedom of the Press, and the Fetzer Institute.