
It has been a wretched week for the American technology industry.

There is the big story: the double revelation that Cambridge Analytica, ostensibly a voter-profiling company, used data from 50 million Facebook accounts without securing users’ permission; and that thousands of third-party developers who once built seemingly innocuous apps on Facebook’s platform may have their own caches of private information.

This debacle also exposed Cambridge Analytica’s CEO as a pedigreed creep: Alexander Nix promised to bribe and blackmail politicians and used the N-word in company emails. Though Nix has now been suspended, he kept the confidence of some of President Trump’s most important supporters for years, including Steve Bannon and Rebekah Mercer, a co-owner of Breitbart News and a Cambridge Analytica board member.

But this was not the only—or even the most awful—nightmare to befall a tech company this week. On Sunday night, one of Uber’s self-driving cars struck and killed a woman in Tempe, Arizona.

Elaine Herzberg, a 49-year-old woman, was walking her bicycle across a road when a Volvo SUV, outfitted with Uber’s self-driving technology and in fully autonomous mode, collided with her. The car was traveling at 38 miles per hour in a 35-mile-per-hour zone, and it did not attempt to brake before striking her, according to Tempe police.

It is the first time that a self-driving car, operating in fully autonomous mode, has killed a pedestrian. Sylvia Moir, the police chief of Tempe, announced on Tuesday that Uber was likely not at fault for the collision. But after her department released footage of the collision on Wednesday, transportation experts said it showed a “catastrophic failure” of Uber’s technology.

The two stories did not perform equally in the press. By the middle of the week, the Uber news had drifted off the front pages of The New York Times, The Washington Post, and CNN. It often sat near the middle or bottom of the page on Techmeme, a website that aggregates technology news from dozens of outlets. The Cambridge Analytica story, meanwhile, remained consistently above the fold at every outlet. I found myself asking: Why?

Perhaps it’s because people still mostly believe the hype around self-driving cars. This isn’t surprising: I still mostly believe the hype. Statistically speaking, cars of all types are ubiquitous, high-speed murder machines. Automobiles kill about 102 Americans every day, according to government data. “Accidents,” a category which includes car crashes, are the third leading cause of death in the United States, according to the CDC.

Nor are nondrivers exempt from the carnage. Nearly 6,000 pedestrians were killed by cars in the United States in 2016. Hundreds of cyclists die every year as well.

So maybe the relative lack of coverage of the Uber crash represents a healthy perspective. It suggests, perhaps, that journalists and the public understand the difference between anecdote and data. Sixteen Americans die every day while walking near a street. Most of us never learn their names. What makes Elaine Herzberg different?

Not to give the American press too much credit. Surely the Facebook story dominated the news cycle because it is a tendril of the much bigger story, of the ungainly leviathan that oozed to the center of our national attention two years ago and has never departed. The Cambridge Analytica scandal is a story about the 2016 election. Of course we can’t get enough of it.

Yet perhaps we’re wrong to treat them as two different stories at all. Americans once decided to let Facebook operate with more or less free rein, and we are now paying the price for its lack of governance. Will we grant Uber, and the broader self-driving car industry, the same power? This is the thread that joins Facebook and Uber’s nightmares. Pull at it, and the entire way we think about technology in America starts to unravel.


The Uber collision was a tragedy for the Herzberg family, who have said they may pursue legal action. But for the self-driving car industry, it was more or less an inevitability.

Almost alone among emerging technologies, the self-driving-car industry is pursuing a clear and seemingly attainable goal. Even as the timelines for other new products—MOOCs, lab-grown meat, virtual reality—have lengthened, self-driving cars remain stubbornly on track for mass deployment. Waymo’s autonomous cars have driven more than 4 million miles on public roads. Several automakers, including Ford and BMW, have pledged to sell self-driving cars to consumers by 2021.

The collision in Tempe could have challenged that dominance. Two years ago, Joshua Brown, a 40-year-old Ohio man, was killed after his Tesla crashed while in semiautonomous “autopilot” mode. But Brown was in the driver’s seat at the time, and he appears not to have heeded Tesla’s requirement that he keep his hands on the wheel, a report from the National Transportation Safety Board found.

Herzberg’s death was the first time that someone was killed by a self-driving car despite not having signed up for the risk of sitting in one. That’s a big deal. In five years, maybe only 10,000 Americans will own a self-driving car. Each of them will be a potential Joshua Brown. But every American will have to share the road with those vehicles. By 2023, there could be 325 million potential Elaine Herzbergs.

The self-driving car industry has long championed the idea that autonomous driving will be a great victory for public health. Three hundred thousand lives saved per decade in the United States, promised a 2015 white paper from two McKinsey consultants. If true, autonomous driving would save more people than seat belts and airbags do. Self-driving cars would be as virtuous as vaccines.

But what if the technology doesn’t work very well? What if this huge, accelerating industry—which enjoys support from Silicon Valley, Wall Street, and Capitol Hill—needs to slow down? The first pedestrian killed by an autonomous vehicle will attract more attention than the second, the third, or the fourth. One day, dead pedestrians will cease to be news at all. They’ll just be statistics.

This is the time to talk about the safety of self-driving cars, in other words.


On Wednesday, the Tempe police department released a video of the collision. It is really bad. The film appears to show Herzberg coming into view at least a second before she is hit. It also shows the person behind the steering wheel—the person whom Uber calls the “safety driver”—appearing to look down at something until almost the moment of the collision.

The video stops before the moment of collision, but it may disturb some viewers.

“A typical [human] driver on a dry asphalt road would have perceived, reacted, and activated their brakes in time to stop about eight feet short of Herzberg,” according to a Bloomberg summary of an outside forensic analysis.

Uber’s technology should have braked in time, too. Lidar, a laser-based sensing technology that Uber’s cars use, is supposed to detect moving bodies in the dark, yet it did not detect Herzberg. “This is exactly the type of situation that lidar and radar are supposed to pick up,” David King, an Arizona State University transportation professor, told The Guardian. “This is a catastrophic failure that happened with Uber’s technology.”

The video “strongly suggests a failure by Uber’s automated driving system and a lack of due care by Uber’s driver (as well as by the victim),” agreed Bryant Walker Smith, a University of South Carolina law professor, when speaking to Bloomberg.

For now, Uber has halted testing of its entire fleet of self-driving cars. “The video is disturbing and heartbreaking to watch, and our thoughts continue to be with Elaine’s loved ones,” said the company in a statement. “Our cars remain grounded, and we’re assisting local, state, and federal authorities in any way we can.”

Until Uber or the federal government releases more information, it’s hard to say why, exactly, the crash happened at all. But it’s very easy to say why it happened in Arizona.

As it happens, a piece of advertising from Arizona tells that story best. In February, the Arizona Commerce Authority ran a proud piece of sponsored content in the Harvard Business Review.

“As the debate about testing and regulation of driverless cars arose in several other states, in 2015, Governor [Doug] Ducey stepped forward and signed an executive order instructing state agencies to ‘undertake any necessary steps to support the testing and operation of self-driving vehicles on public roads within Arizona,’” bragged the ad.

“Under Governor Doug Ducey’s direction, the state has played a leading role in this dramatic evolution of mobility—and it’s done so by getting out of the way.”

Suddenly, Arizona became the industry’s most deregulated state. Self-driving-car manufacturers did not need to apply for the right to test their technology on the roads, as California requires. They did not need to report how many times a human took over. Nor did they need to apply for a police escort, as New York mandates.

The order worked. “Self-Driving Cars Flock to Arizona, Land of Good Weather and No Rules,” reported Wired. Waymo, Intel, Uber, and GM all opened autonomous testing facilities in Arizona. More than 600 self-driving vehicles took to the state’s roads. And so began a war of deregulation: In February, California told self-driving carmakers they could test the vehicles without putting anyone in the driver’s seat. Ducey, not to be outdone, allowed the practice in Arizona a few days later.


Facebook was never quite feted in this way—but it has benefited from a much longer legacy of lax regulation. For decades, American lawmakers in both parties have tried to clear the way for technological innovation by getting out of its way entirely. The Communications Decency Act of 1996 made it virtually impossible to sue online platforms for defamation or libel. Regulators have never stepped in to stop Facebook from buying another social network, and modern interpretations of antitrust law make it difficult to prosecute Facebook for anticompetitive behavior.

Facebook collects and controls an incredible amount of data. But Congress has never attempted to regulate this hoard or even understand it. Social-network data has no equivalent of the Fair Credit Reporting Act, which guarantees Americans access to the information that credit bureaus collect about their finances.

The only significant federal action against Facebook occurred in 2011. The Federal Trade Commission alleged that the social network lied to the public and did not accurately represent how it handled user data. It reached a settlement with the company later that year, and Facebook must now undergo audits of its privacy practices every two years until 2031. Returning to that consent decree now, the FTC’s list of incriminating facts reads like a preview of the current scandal:

  • Facebook told users they could restrict sharing of data to limited audiences—for example with “Friends Only.” In fact, selecting “Friends Only” did not prevent their information from being shared with third-party applications their friends used.
  • Facebook had a “Verified Apps” program and claimed it certified the security of participating apps. It didn't.
  • Facebook promised users that it would not share their personal information with advertisers. It did.

Less than a year after the FTC ended its investigation, Facebook had its initial public offering. Mark Zuckerberg and other longtime employees became multibillionaires. Facebook continued to entrench its power as a dominant American institution, capable of reshaping journalism and possibly even tilting election results.

In the wake of the Cambridge Analytica news, the FTC has now opened another investigation, according to The Wall Street Journal. Maybe it will find something that requires new sanctions. Maybe it will take a harder line against the company.

But even if it does, the FTC will be correcting for damage that’s already been done. Uber’s tragedy presents an opportunity for regulators to get out ahead of a problem. What if Americans treated this first pedestrian death as we should have treated the 2011 Facebook investigation? When the government finally does investigate what happened on Sunday night in Tempe, what if lawmakers responded to it by promising to limit corporate power instead of gamboling down the path of deregulation?

Autonomous cars really might usher in a new era of safer roads and cleaner air. They really might change the world by connecting us all. But Facebook sounded nice once too. What if we actually learned something this time?
