
Two years and four months after Facebook found out that Cambridge Analytica might have illicitly pulled user data from its platform, and five days after the latest round of stories about the political consultancy’s electioneering, Mark Zuckerberg finally made a statement about the situation.

Despite Facebook's earlier insistence that this was not a "data breach," Zuckerberg offered up exactly the solutions one might offer for a breach: assurances, small technical fixes, and some procedural improvements. Among other changes, Facebook will investigate apps that pulled in large amounts of its data in the past and ban those found to have misused it. The company will also inform people whose data has been misused, including those in the dataset that got passed to Cambridge Analytica. Zuckerberg introduced a new rule that Facebook will cut off developers' access to the data of users who haven't logged in to their apps for three months. And finally, Facebook will place a notice at the top of the News Feed linking people to their app-privacy settings.

This is the very minimum that Facebook had to do in this situation. It is hard to imagine the company doing any less, given the public attention and pressure it faces.

But let’s look at the big questions that the Financial Times raised: “Why did Facebook take so little action when the data leak was discovered? ... Who is accountable for the leak? ... Why does Facebook accept political advertisements at all? ... Should not everyone who cares about civil society simply quit Facebook?”

On every single one of these questions, Zuckerberg offered nothing.

Facebook’s position has been considerably complicated by further revelations from people formerly inside the company and those who worked with the platform.

Sandy Parakilas, a former employee at Facebook who worked on a team dedicated to policing third-party app developers, told The Guardian that he warned Facebook executives about the problem of data leaking out of the company’s third-party applications, and was told: Do you really want to see what you’ll find? “They felt that it was better not to know,” he said. “I found that utterly shocking and horrifying.”

Equally troubling, Carol Davidsen, who worked on the Obama 2012 campaign, recently tweeted that Facebook knew her team was pulling vast amounts of user data out of the system to use in political campaigning, and did nothing to stop it. "Facebook was surprised we were able to suck out the whole social graph, but they didn't stop us once they realized that was what we were doing," she said. "They came to office in the days following election recruiting and were very candid that they allowed us to do things they wouldn't have allowed someone else to do because they were on our side."

The Obama team was not doing exactly the same things as Cambridge Analytica, but this is a shocking revelation about how much data was leaving Facebook, and how little was done to stop it.

In Zuckerberg's statement about the weekend's scandal, Facebook lays the blame squarely on a Cambridge University psychologist, Aleksandr Kogan, for building an app that vacuumed up data from unwitting users and stored it outside the system, so that it could be used by Cambridge Analytica. And that is fair: Users could not have imagined that when they took a personality quiz, they would end up in the voter-targeting database of a company associated with Steve Bannon.

But that is clearly not the only issue here.

One problem, as the Tow Center for Digital Journalism research director Jonathan Albright explained to me, is that apps developed by third parties were crucial to Facebook’s growth in the early 2010s. In order to secure the loyalty of developers who were helping grow the platform without being employed by the company, Facebook used the other currency at its disposal: user data.

What Facebook offered was a platform of users and the knowledge of their connections to each other. What Facebook wanted back was user and engagement growth. Each party got what it wanted.

Developers could use a set of tools unavailable to normal users, the “Graph API,” to pull all kinds of user data. “They were provisioning and handing out their users’ personal info en masse [via the Graph API] to whoever wanted it before the IPO—like ‘data candy,’” Albright said.

In 2010, The Wall Street Journal ran a series of articles detailing how many apps were leaking data, both intentionally and unintentionally, including one company, Rapleaf, that was using the data to build an ad-targeting product. And yet the data torrent continued all the way until 2015, even after a Federal Trade Commission investigation found that Facebook had misled users. It wasn't until Facebook fully rolled out the "more restrictive" Graph API 2.0 that the taps began to be turned off.

“Among other violations of user trust, the commission found that Facebook had promised users that third-party apps like FarmVille would have access only to the information that they needed to operate,” wrote the University of Virginia media scholar Siva Vaidhyanathan. “In fact, the apps could access nearly all of users’ personal data—data the apps didn’t need.”

Facebook employees, then and even now, would probably argue that this occurred in a more techno-utopian time, when data was supposed to be open. That is true. The idea of Web 2.0 had open architectures baked into it, and Facebook was a part of that movement in its early days.

But as I detailed earlier this week, the privacy flaws in that approach were known long before Cambridge Analytica was a twinkle in Robert Mercer's eye. By the time Kogan created his app, Facebook had been told by external scholars, as well as by at least one of its own employees, that data leaking out through third-party apps was a huge potential problem.

And Facebook did very little, as far as the public record shows, for years. What efforts it did take were pushed on it by the FTC and undercut by lax enforcement, at least as Parakilas attests. When Facebook's stated commitments to users ran up against its growth strategy, Facebook chose growth.

Perhaps the most disappointing part of Mark Zuckerberg's statement is his refusal to talk about why Facebook's data policies were structured the way they were. Who made these decisions, and to whose benefit? Part of reckoning with the company's role in the world is being more transparent about the company's own interests. Until that happens, it's hard to see how Facebook can regain the trust of users who have been following the recent revelations.

Because many of the capabilities that seem to frighten people about Cambridge Analytica do, in fact, exist right within Facebook's own ad-targeting infrastructure. At what point does Facebook look at those capabilities and decide that something needs to change? It seems far past time for that reckoning to begin.

What do we conclude from the facts that Facebook allowed massive amounts of its user data to flow out, didn’t take serious corrective action when this issue came up internally or externally, and even now, is willing to take only the de minimis steps to address the smallest possible interpretation of the problem?

Do we really want to see what we'll find?
