Mark Zuckerberg at a meeting with Indonesian government officials. (Beawiharta / Reuters)

Mark Zuckerberg is headed to Washington. No one knows precisely when or to whom he will testify, but he himself has said he would be “happy” to do so.

That he has never been before Congress is one of those minor miracles that only technology companies seem capable of generating through their bulky “policy” (i.e. lobbying) teams and still considerable popularity.

But times are changing, and in the wake of the Cambridge Analytica affair, Facebook practices that have been known for years are coming under the most intense scrutiny they’ve ever received. Senator Ron Wyden, for example, has already submitted a formidable list of questions to Facebook.

I’m most interested in pinning down the facts around Cambridge Analytica and political advertising generally. But Facebook is multifaceted, so I reached out to a dozen close observers of the company to see what they wanted to ask Facebook’s CEO.

On Cambridge Analytica

  • When the way that Aleksandr Kogan and Cambridge Analytica were using Facebook user data came to light, where in the company was it detected?
  • How far up the executive chain did it get?
  • What did you know and when did you know it?
  • Furthermore, why did eight months pass between the news reports about Cambridge Analytica and the letter Facebook sent asking them to certify they’d deleted the data?
  • Who was vested with the power to contact and sanction developers like that?
  • In the history of the platform, how many times were developers sanctioned by Facebook for the misuse of Facebook user data?
  • Immediately after the Cambridge Analytica story came to Facebook’s attention, did any policy or process change within the company?
  • Realistically, enormous amounts of user data have already escaped Facebook’s platform, irreversibly. Auditing current platform users, as you’ve indicated you’ll do, can’t put the genie back in the bottle, and might even be pointless. So now what?

On Political Advertising

For years, Facebook touted its electoral advertising by showcasing a variety of electoral “success stories” in its marketing materials. In Rick Scott’s Florida gubernatorial campaign, for example, the company claimed to have increased Hispanic turnout by 22 percent.


Facebook has since removed these political case studies from the menu of “success stories,” though they remain technically online. This work raises a series of questions.

  • Is Facebook political advertising more effective than other forms of political advertising?
  • Why?
  • Do you believe one of Facebook’s advantages as an electoral-advertising platform was the ability to run targeted “dark” ads that allowed campaigns to tell very small audiences what they wanted to hear without the fear of public outing?
  • If the dangers of such a system are obvious—and they are—why were these types of political ads allowed (especially if you think they were not part of Facebook’s competitive advantage)?
  • Should there be restrictions on the targeting criteria that political advertisers use?
  • Facebook has embedded employees in major political campaigns, including Donald Trump’s in 2016. What were the precise roles that these staffers played?
  • Do you plan to continue to embed staffers in political campaigns?
  • Facebook has begun to offer limited ad transparency in Canada, but journalists and researchers have been disappointed with it: the tool shows only ad content, not targeting, reach, or engagement rates. Would you commit to meaningful ad transparency for all political advertising that shows targeting criteria, reach data, and engagement rates?

Given all the questions about ad targeting, it’s remarkable that no one outside Facebook appears to have audited how the company’s ad-targeting system works. Julie Cohen, a professor of law and technology at Georgetown Law, proposed asking Zuckerberg whether “either the FTC or the [Federal Election Commission] currently has the capability to audit/stress test Facebook’s ad-targeting system.”

Stephanie Hankey, the executive director of the Tactical Technology Collective, has been working on a worldwide project examining the role of data in elections. She notes that Facebook has a very active political-ads sales team, which built a tool that matches voter records with Facebook users and has pushed it out to countries across the world, from Australia to Chile to the United States.

All of this is to say: far from merely providing a vehicle for politicians to reach voters, Facebook is aggressively pushing campaigns to use more data on its platform. Hankey suggests that Congress should ask about Facebook’s “active role” in political-advertising data sales. She also notes that campaigns upload their data to Facebook, which Facebook then keeps. That raises another question: What does Facebook do with the data that campaigns upload into its system?

On Facebook and Psychological Manipulation

The Cambridge Analytica case brought many users’ fear of psychological manipulation to the fore. We pour so much data into these systems, and we know that they know a lot about us. Between humans, the more someone knows about you, the more likely they are to be able to persuade you in terms that you will find amenable. It’s why salespeople “build relationships.” Is the same true of this vast database? And where might Facebook’s persuasive power cross the line into coercion or manipulation?

One Dartmouth media-studies scholar, Luke Stark, proposed a series of questions asking Mark Zuckerberg to explain the data that Facebook “collects about mood and emotion.”

  • How does Facebook collect and store reaction-icon data (sad faces, happy ones, Likes, and wows) as well as sticker data (which is all categorized by mood)?
  • How does Facebook collect and store textual-sentiment analysis of people’s posts and comments?
  • Do these indicators of mood or psychological cast get incorporated into microtargeting criteria?
  • Can third parties access this data in the targeting process?

On What Facebook Knows That You Don’t Tell It

Perhaps the thing that disturbs people most about Facebook is when the system figures something out about your identity that you didn’t tell it. That is to say, when Facebook combines your data with data about other people to infer new things about you. Those capabilities are becoming more powerful as Facebook’s bleeding-edge artificial-intelligence capabilities grow.

One technology sociologist, Zeynep Tufekci, suggested a series of questions asking Zuckerberg to detail these processes.

  • What is the full range of computational inference (using existing data Facebook has collected or purchased or otherwise obtained to infer or impute other traits, or classify people into groups) that is done on Facebook users? What are the products and processes that use these?
  • Why isn’t each case of computational inference disclosed to the user every time, or available in the data that a user can download about themselves?
  • For example, why isn’t a person informed every time that they are targeted or included [in the audience for an advertisement] due to being identified as part of a “look-alike” audience through the Facebook audience-targeting tools that use artificial intelligence to infer or impute traits or classifications on individuals?

Facebook, of course, not only created its own core products, but also purchased Instagram and WhatsApp, among other companies. The WGBH senior executive producer Bill Shribman suggested asking Zuckerberg how Instagram data feeds into the system, which prompts the following question.

  • Does Facebook use machine learning to examine Instagram photos, videos, and stories in order to target advertising across the Facebook business platform?

David Robinson, formerly of Princeton University’s Center for Information Technology Policy and now the managing director of the social-justice and technology nonprofit Upturn, came up with a line of pointed questions about Facebook’s use of artificial-intelligence “solutions,” reproduced in full below:

  • On January 31, 2018, you stated that Facebook has “built AI systems to flag suspicious behavior around elections.” This isn’t the first time Facebook has turned to AI to help deal with bad ads. Back in 2016, press reports revealed that advertisers could use so-called “ethnic affinity” marketing categories to potentially discriminate against people in the areas of housing, employment, and credit. On November 11, 2016, Facebook publicly committed to “build tools to detect and automatically disable the use of ethnic-affinity marketing for certain types of ads.” But a year later, the publication ProPublica found that the system you had built was letting housing ads through without applying the new restrictions. The company admitted this was due to a technical failure. If the solutions you build cannot be trusted to flag something as simple as a housing ad, there are clearly reasons to be skeptical that your company will be able to use the same sort of technology to correctly identify the more nebulous category of “suspicious behavior around elections.”
  • If you turn to AI to catch election meddling, how will you prove to the public that it works?
  • How will you define “suspicious behavior around elections”?
  • Will you publicly share details about how often you flag and remove ads, users, or pages that are flagged by such a system, in the same way you share information about how often you remove content in response to requests from governments around the world?

On Facebook’s Role in Society

Facebook has often talked about itself as a community, not a business, and has largely elided the difference between the two. I believe it’s worth putting this question specifically to Zuckerberg.

  • Has Facebook ever made a product decision that was in the best interest of the business, but not in the best interest of the community?

One Cornell professor, Ifeoma Ajunwa, wants to see Zuckerberg address Facebook’s role in broad terms, as well.

  • How does Facebook conceptualize its responsibilities or duties to its consumers? Does it see itself as a neutral platform for free expression? If so, to allow such free expression, does Facebook take on any responsibility for protecting users’ personal information?
  • If Facebook sees itself as more than a platform, then does it agree that it should function akin to a bank, where it must take on heightened responsibilities for protecting its users’ information?

Ajunwa also poses a simple question about what should and should not change when a particular kind of activity becomes part of a platform. “By taking on the more public functions of traditional media (job posting, housing), does Facebook agree that it will be regulated under the same laws as would apply to newspapers or classifieds doing the same functions?” she asks.

One UCLA law fellow, Alicia Solow-Niederman, wonders what Zuckerberg expects the technical fluency of a user to be.

  • “What’s the technical education level required by the systems Facebook puts in place?” she asks. “What are they imagining as the responsibility of the user?”

On Possible Regulation

The longtime information-technology administrator Tracy Mitrano, who has previously tangled with tech titans over regulatory issues and is now running for Congress in New York’s 23rd district, saw Facebook as having ducked its responsibilities under existing election law.

  • “How is Facebook going to comply with the Federal Election Commission in terms of disclosing paid political advertisements?” she asks. “It’s not a question of whether, but how. They skated out of that in 2011 or 2012, and now we know that was a mistake.”

Like Georgetown’s Cohen, she stressed, based on her previous experience, the necessity of a mechanism for ensuring companies’ compliance with regulatory settlements. She suggested that government, perhaps through new laws, needs to develop the legal ability and technical capacity to investigate the workings of algorithms within regulatory proceedings.

Congressman Ro Khanna has been trying to figure out what legislation could actually clear the current Congress. After conferring with Alexander Macgillivray, the former Twitter general counsel and former deputy CTO of the United States, Khanna suggested asking Zuckerberg if he would support a key provision of President Obama’s stalled Consumer Internet Bill of Rights. “You should have a right to know where your data is coming from, what is happening with your data, and where it is being sent,” Khanna said. He also mentioned pressing Zuckerberg on a regulatory requirement to notify users of any kind of data breach, and on increasing the portability of a user’s data.

And finally, Adam Holland of the Berkman Klein Center for Internet and Society wants to know if there is any world in which Facebook agrees that it should be regulated as a public utility.

“Mr. Zuckerberg, as Facebook continues to grow and succeed and be used by ever more citizens of the world, can you imagine Facebook being or becoming so ubiquitous, so essential to daily life, that you would consider it appropriate for FB, and the information that it controls, to be regulated as a public utility/resource/basic human right? If so, what is that size/ubiquity, and if not, why not?”
