From swiping right for dates, to grocery shopping from their desks, to following All the President’s Tweets in real time, Americans have wholeheartedly embraced big tech.
Only now we know: Our love affair with networked life also comes with costs.
Data breaches exposing Social Security and credit card numbers. Email hacks revealing corporate secrets. Fake news and weaponized disinformation. Algorithms that reinforce racial bias. Websites that enthusiastically recommend conspiracy videos. An advertising-driven online surveillance state that cheerfully connects us to puppy videos and perfect one-pot recipes, while funneling our private information to shadowy overseas electioneering firms.
Largely left to its own devices, Silicon Valley has made billions of dollars creating delightful and useful products—and billions more harvesting and selling our personal data. Yet with each new revelation of how our shares and likes can end up causing problems, calls for stronger and more vigorous oversight have grown. Today, even industry titans are asking the federal government to play a more active role in preventing abuses and promoting trust.
But what, exactly, should the new rulebook look like? And who should write it?
The Atlantic recently gathered policymakers, industry leaders, and other experts in Washington, D.C., to discuss the future of tech regulation. Underwritten by PwC, the conversation grappled with the thorny, unresolved questions that will shape our devices, apps, and lives for years to come.
Here are four key takeaways:
Privacy regulation is coming
Regulating big tech isn’t unheard of. In a landmark decision in 2000, French courts ruled that Yahoo auction sites peddling Nazi memorabilia violated the country’s laws and ordered the internet giant to prevent web users in France from visiting those sites.
More recently, the state of California has passed privacy, net neutrality, and bot-identification laws designed to protect consumers from abuses. Meanwhile, the European Union’s sweeping set of 2018 privacy rules, the General Data Protection Regulation (GDPR), requires tech companies to give users more control over their personal information—or otherwise face billion-dollar fines for noncompliance.
By contrast, the U.S. government has long taken a more hands-off approach. But lawmakers, industry insiders, and other observers expect that to change.
Last year, the Trump administration began working with the 36-nation Organisation for Economic Co-operation and Development to create international guidelines for the design and use of artificial intelligence (AI). It also has fielded more than 200 public-comment filings about digital privacy laws.
“We do have a role to play on the government side to make sure that we are standing up for consumers, standing up for people’s rights and privacy,” said Representative Suzan DelBene, a Democratic congresswoman from Washington, and a former tech executive. She has introduced legislation in the House of Representatives that would more tightly regulate personal data. “We need to make sure there are rules of the road that all companies can follow.
“We are leaders in a lot of innovation,” she added. “We should be really thoughtful and showing leadership, too, on standards.”
Perhaps surprisingly, big tech increasingly agrees with that sentiment. For years, the industry has followed the doctrine of “move fast and break things,” prioritizing innovation and growth over social responsibility.
No longer. Today, tech leaders are calling for Washington to take action, in part, to rebuild customer trust following a series of high-profile missteps and scandals. Another reason is that policy fragmentation among states and nations increases costs for businesses while creating new risks and compliance headaches.
Last year, the Information Technology Industry Council, a Washington, D.C.-based lobbying group that represents many of tech’s largest firms, released a framework for potential congressional data privacy legislation.
“There’s a consensus that there should be a federal privacy law,” said Jason Oxman, ITI president and CEO. “And that’s a big step right there. The industry is united. I think consumer groups are united. I think we have broad agreement on that because the tech industry has lost the trust of its customers.
“If we can establish at a national level something consumers feel comfortable [with],” he added, “where they can say, ‘I don’t have a problem with the use of information, I feel as if my privacy is being protected by the industry because the government has established a framework for the industry to adopt,’ I think that gets us back to where we need to be.”
Other regulatory goals are TBD
To create that sort of protective framework, DelBene’s proposed bill would require firms to disclose how user data is being shared, allow users to “opt in” before that data can be used, and mandate that privacy policies be presented in “plain English.”
It also would give the Federal Trade Commission rule-making authority while empowering state attorneys general to pursue cases involving violations.
“People are seeing their data used in ways they never anticipated,” DelBene said. “And that absolutely needs to change. We need to make sure privacy is the default, that people have clear information of what happens to their data.
“Also, we need to have enforcement,” she added. “From a federal government standpoint, who is going to provide this enforcement and to make sure that these regulations really stick?”
Working in DelBene’s favor? Lawmakers, consumers, and industry leaders all consider strengthening consumer privacy and improving data security to be worthwhile regulatory goals. According to PwC research, just 25 percent of consumers in 2017 believed that most companies handle their private data responsibly; meanwhile, the U.S. Council of Economic Advisers estimates that malicious cyber activity cost the national economy between $57 billion and $109 billion in 2016.
“I worry if you try to solve a problem that you haven’t defined first, you might not solve that problem... you might end up creating problems you didn’t intend to create.”
But other targets for new laws are less clear. Take the issue of dependable and trustworthy online information: posts from users who are, in fact, real people, not bots, and news articles produced by reputable professionals, not Macedonian teenagers. Some believe that fake news and disinformation spread on social media for political and business gain merit a vigorous government response—yet others argue that the real problem is tech companies suppressing certain points of view out of partisan political bias.
Similarly, disagreement exists over how to regulate AI, which is expected to contribute up to $15.7 trillion to the global economy in 2030, but which can also produce marketplace discrimination and propagate the darker parts of the web.
FTC Commissioner Noah Phillips said that before effective tech regulations can be created and successfully enforced, lawmakers and other stakeholders need to define and agree upon two key things: the problems they want to solve, and the abuses they want to prevent. A good historical model: Washington’s crafting of environmental regulations and creation of the Environmental Protection Agency in the 1960s and 1970s, which targeted cleaner, healthier air and water.
“I worry if you try to solve a problem that you haven’t defined first, you might not solve that problem,” Phillips said. “And second, you might end up in a scenario where you’re creating problems you didn’t intend to create.”
Lawmakers will have to balance competing interests and priorities
Depending on the goals that the government sets, panelists said, the laws it adopts to achieve them will attempt to strike a series of tricky balancing acts:
- Between watered-down rules that fail to safeguard citizens, and heavy-handed intervention that stifles tech industry innovation.
- Between protecting free speech, and ensuring freedom from harmful and hateful online content.
- Between permitting superstar firms to grow and prosper in a marketplace characterized by winner-take-all network effects, and preventing the competition-squelching domination of a handful of digital Standard Oils.
- Between allowing personal data collection and analysis that helps companies create better products, and failing to prevent abusive and creepy customer profiling.
“I don’t want to go on[line] and have this mega-footprint of my digital profile as an African-American woman, and know that I’m being marketed higher-interest credit cards, or because I am a mother of two that, somehow, they figured out through inferential assumptions about my lifestyle or my digital activity that I’m actually within this [particular] box,” said Nicol Turner Lee, a fellow at the Center for Technology Innovation at the Brookings Institution. “You know, I do want all the other stuff. I want [streaming services] to tell me what movie to watch, and I want a big box store to recommend what product to actually get.”
“I don't want to go online and have this mega-footprint of my digital profile as an African-American woman, and know that I'm being marketed higher-interest credit cards.”
New regulations that impose new requirements also have the potential to hurt smaller companies and start-ups that can’t afford the added costs and complexity of compliance—all while strengthening the position of larger, wealthier, and more established firms that can.
“There were a lot of companies that when the GDPR went into adoption, their website[s] shut down,” Turner Lee said. “They were not compliant the day of adoption, and people were no longer able to get them. And so I think we have to be careful about that.”
Frida Polli, the CEO and co-founder of a company that makes software to reduce bias in the hiring process, said that the potential for regulation to stifle innovation can be overblown, as entrepreneurs and firms will adapt to new rules and constraints.
“Necessity is the mother of invention,” she said. “I don’t think we can really use some sort of regulatory environment as a reason for why innovation can’t still happen.”
A case in point: The Equal Employment Opportunity Commission (EEOC), which administers and enforces laws against workplace discrimination, already has regulatory authority over the software made by Polli’s company. To comply with EEOC regulations governing adverse impact—that is, hiring practices that work to the disadvantage of members of a particular race, sex, or ethnic group—the company internally audits the custom algorithms it builds for firms to help them screen and sort job applicants.
“A lot of times, what would traditionally happen is that [our] tool would go live, and then you would find the adverse impact,” Polli said. “So we’re sort of preempting that because of the idea that this regulation is going to be applied to us. In that sense, I think it’s actually made us more innovative.”
Big tech and government will have to work together
Panelists agreed that neither side can go it alone. Without independent and thoughtful oversight, an industry built on speed, scale, and disrupting the status quo will produce unintended and sometimes harmful consequences for society at large. Individual firms that have reported data breaches, for example, have seen their stocks underperform the market over time.
Meanwhile, lawmakers often lack the resources and expertise to keep pace with technological changes. “One of the challenges that we’ve had in Congress is folks haven’t been sure how to tackle them,” DelBene said. “It’s not so much their partisan issues, it’s [that] they’re covering areas that a lot of people don’t have familiarity with and expertise in. And so they’re afraid to move forward.”
In the future, government and big tech will need to work in a symbiotic partnership that leverages each other’s strengths, said PwC U.S. Advisory Risk and Regulatory Leader David Sapin. Lawmakers must create the needed guardrails, and industry innovators will need to figure out how best to drive within them.
To wit: In 2018, the city of Chicago enacted an ordinance requiring hotels to supply their housekeepers with wearable panic buttons that would allow them to instantly summon help if they are sexually assaulted or harassed by guests. The law was simple and well-meaning. But execution proved to be complex.
According to Sapin, the same geotracking technology that would allow emergency responders to precisely locate, say, a housekeeper on the 18th floor of a 400-room hotel could potentially also allow hotels to track their staffers’ movements throughout their shifts—leading housekeepers to say, “Hold on a second, are you going to track us all day long? So you’re going to know when I’m on break, when I’m outside having a cigarette, where I am all the time?”
“At the end of the day, the companies that are going to survive are going to be the ones that, when they deploy and develop new technologies, do it responsibly.”
The solution, Sapin said, was to build panic buttons that track a wearer’s location only once the employee has pressed and activated them. That wasn’t required by the law. But it came about because hoteliers understood and adhered to the law’s intent—enhancing public safety, not private surveillance.
“I could think of about 25 different things that [Chicago] could have built into the regulation,” Sapin said. “But is that the appropriate place to address every potential policy risk? If they tried to address every conceivable risk, it may actually slow the process to address the primary risk: the safety and well-being of the hotel staff.”
Ultimately, Sapin said, the most effective government-industry partnership will be one in which smart regulatory principles—the new rules of the road—guide not only back-end compliance by big tech, but also the front-end design of the apps and other products that we love.
“In this era of emerging technologies, it’s not going to just be about complying with the law,” Sapin said. “It’s also going to be about how you build responsible innovation into how you’re developing new products, how you’re thinking about new services. At the end of the day, the companies that are going to survive are going to be the ones that, when they deploy and develop new technologies, do it responsibly.”