The benefit of the information glut is that something novel always appears at the top of our Facebook and Twitter feeds. However, this engineered serendipity has come at a cost: We’ve become conditioned to expect engaging, up-to-date content when we want it. When a major crisis breaks—a mass shooting, a natural disaster, an emerging political scandal—we refresh and refresh, and usually see new information every time. But the COVID-19 pandemic is not like those crises. Under normal circumstances, rigorous scientific research about a new disease takes months or years. Reliable new information is, simply, slow to appear. Yet people keep searching—for information about symptoms, spread, treatments, fatality rates. In response, the algorithm returns something.
The feed abhors a vacuum. But in many cases, algorithms have little or no authoritative content to push to users—because experts haven’t bothered to produce any, or because what they have produced simply isn’t compelling to the average social-media user. Their work is locked in journals, while bloggers produce search-engine-optimized, Pinterest-ready posts offering up their personal viewpoint as medical fact. And with COVID-19, as in past outbreaks, anti-vaxxers and related influencers with a tenuous hold on reality jumped on the emerging topic early, posting repeatedly about synthetic-virus and mass-vaccination plots ostensibly hatched by Bill Gates and Big Pharma.
But around the same time, in late January, exceptionally prescient voices were also tweeting with increasing alarm about very real risks. Reputable figures such as Scott Gottlieb, Donald Trump’s former FDA commissioner, and Carl Bergstrom, a biologist at the University of Washington, used Twitter to walk the public through emerging research and data suggesting that, despite the reassurances of world leaders and health ministers, events were headed down a path that could be very bad indeed. Notably, these people presented evidence while also acknowledging its limitations. As accurate, up-to-date information began to seem like a matter of life and death, a growing share of the public began to wonder whether what elected officials, institutions, and the media were telling them was in fact correct.
Determining who is an authoritative figure worth amplifying is more challenging than ever. Curated, personalized feeds enable bespoke realities. Trump supporters trust Fox News or One America News Network, while liberals follow a very different set of trusted sources. The legitimacy of media outlets is constantly questioned. Internet users have made collages of statements from mainstream publications that did not age well—for instance, early headlines and chyrons that could be interpreted as downplaying the threat from the coronavirus—and tweeted them out to dismiss the competence and quality of all mainstream media. Meanwhile, self-published Medium posts and tweetstorms by people with varying degrees of expertise—including none at all—regularly go viral. Some are accurate and well researched, and deserve attention and discussion; others are garbage pushed by grifters. The algorithm is responsible for deciding what, out of all this, to surface.