
You won’t like Facebook’s new Oversight Board. Yesterday, the social-media giant unveiled its “charter” for a 40-person board with the power to review the company’s decisions about which content can appear on Facebook-owned platforms and which rules it applies when taking postings down. Deciding which videos are too violent, which photos too racy, and which behavior too “inauthentic” is a job destined to make the board unpopular. That it can be unpopular—with users, the media, and Facebook employees alike—and still exist is precisely the point.

Facebook is setting up its Oversight Board because, as the founder Mark Zuckerberg wrote, private companies should not “be making so many important decisions about speech on our own.” He has pleaded unsuccessfully with governments to tell him exactly what he needs to remove from his sites. Now he’s outsourced the final say on a range of decisions to the new board. Still unclear is who will be appointed to the body, how many disputes it will take up, how it will triage them out of millions of possible cases, and how precisely it will interpret the underlying “values” that Facebook released last week. But for now, these details matter less than the fact that someone is finally in charge of making difficult decisions about online speech in public view and on a principled basis.

Vowing to select members from “the widest possible set of diverse candidates from outside our normal channels,” Facebook itself will appoint the initial members by the end of the year; those members will then help select the rest until the board reaches its full capacity of 40 people. The hope is that by appointing members for fixed terms with protection against removal for any reason except breach of a yet-to-be-announced code of conduct, the board will make decisions without regard to public opinion or Facebook’s business interests. Importantly, it will also be seen to be doing so.

The board’s decisions might not be any better in substance than the ones Facebook has been making already. Often, there is no “right” answer. And to hope that this body will finally resolve intractable disputes about the proper limits of free speech that have bedeviled lawyers and philosophers for centuries, well before all the added challenges of the digital age, is to misunderstand the board’s purpose.

What the board can do is explain the reasoning behind any particular decision or rule. This process of transparent, public reasoning is the main way people in a pluralistic community can come to view the rules that they have to abide by as legitimate. Research shows that people’s feelings about whether a decision is legitimate depend more on the process for reaching the decision—and, crucially, whether the decision is explained by reference to neutral, generally applicable rules—than on whether they agree with the choice.

This emphasis on explanation runs throughout the charter. In nine pages, the charter repeatedly refers to the board’s responsibility to make decisions that are “explained clearly,” “using clearly articulated reasoning” and “plain language.” The charter also instructs the board to give substantial weight to its own prior decisions, as common-law legal systems do, in deciding any case. All of this amounts to the idea that the board’s decisions should rest on something more than mere gut feeling or personal opinion. That’s why it was a little simplistic for Zuckerberg to write in his letter yesterday that “just as our Board of Directors keeps Facebook accountable to our shareholders, we believe the Oversight Board can do the same for our community.” While the board should have regard for the community’s interests, it should not be a mere proxy for public sentiment; otherwise, Facebook could simply decide these issues by poll. Decisions need to rest on something more fundamental.

This is the role of Facebook’s “values.” So while the new board and its charter are getting all the attention, the values that Facebook published last week are just as consequential. These values, like the charter itself, emphasize Facebook’s commitment to “voice,” but note that this needs to be balanced against the need to respect authenticity, safety, privacy, and dignity. Importantly, these documents expressly incorporate international human-rights norms as informing the board’s decision making. This grounding in a more widely endorsed set of principles is another bid for broader acceptance of the new rules.

The charter that Facebook unveiled yesterday comes after months of consultation with experts and the public around the world. A report on that process, released in June, chronicled a lot of hand-wringing over how the board should be set up. The only issue on which consensus emerged was that the board should not be limited to deciding individual cases, but should also have the power to influence Facebook’s policy development. This is a recommendation that the company somewhat reluctantly adopted; the final charter says that Facebook may request “advisory” policy guidance.

Obviously, influence over policy is a much bigger grant of power to the board. The policies that decide what stays up and what comes down determine users’ entire experience on Facebook and Instagram. But to confine the Oversight Board’s impact to individual disputes when it will hear only the tiniest fraction of the millions of content-moderation decisions Facebook makes every week would limit the board’s role significantly. And to ignore the 95 percent of people who said that the Oversight Board needs this power to be a legitimate check on Facebook would make a mockery of the consultation process. According to the final charter, however, how broadly Facebook implements the board’s decisions will depend on what the company deems “technically and operationally feasible”—a slippery and opaque standard.

This dialogue among the board, Facebook, and the broader public depends in no small part, then, on the actors operating in good faith. As is evident from current events around the world, constitutions—written or unwritten—can get you only so far. A lot depends on the norms that develop and the commitment of the people involved to carry out their roles reasonably.

The Oversight Board is fundamentally a bet by Facebook that the legitimacy of its decisions matters—and matters more than getting its way on every question. As other platforms seem to double down on the idea that they do not need to publicly explain their decisions or abide by their own rules when it does not suit their short-term interests, Facebook appears to be making a different wager: Accountability and legitimacy can reassure users—and regulators—of the value of its product. Acting in bad faith would undermine Facebook’s own gamble. Legitimacy, Facebook hopes, will become part of its value proposition.

The Oversight Board could be, as the former British deputy prime minister and current Facebook vice president Nick Clegg wrote yesterday, “a model for our industry.” Enough thought has gone into the proposal, at least, that it is unlikely to go the way of Google’s artificial-intelligence ethics board, which was dissolved before it even got started. Norms and legitimacy are built and earned over time. Facebook had to start somewhere.

This article is part of “The Speech Wars,” a project supported by the Charles Koch Foundation, the Reporters Committee for Freedom of the Press, and the Fetzer Institute.
