And yet, leaders and citizens alike have struggled to admit that they know very little about what a reasonable and necessary response might look like in the long run. Health officials recommend staying home if possible, but politicians, such as Representative Devin Nunes of California, have encouraged Americans to dine out in order to support the local economy. Yesterday, the CDC recommended against gatherings of 50 people or more, but some schools and universities resisted changes. Tonight, the University System of Georgia, where I teach, finally announced its intention to move instruction online.
The fear that an extreme reaction, such as closing schools and canceling events, might prove to be an overreaction, one that looks silly or wasteful in retrospect, seems to outweigh any other concern. Caution can also feel imprudent: just staying home isn't so easy for workers who depend on weekly paychecks, and closing is a hard decision for local businesses running on thin margins. But experts are saying that Americans can't really over-prepare right now. Overreaction is good!
It’s hard to square that directive with the associations we’ve built up around overreactions. Ultimately, overreaction is a matter of knowledge—an epistemological problem. Unlike viruses or even zombies, the concept lives inside your skull rather than out in the world. The sooner we can understand how that knowledge works, and retool our action in relation to its limits, the better we’ll be able to handle the unfolding crisis.
The Y2K bug offers a more complex and therefore more relevant example, and one that, unlike my municipal-snowstorm woes, touched everyone: In the late 1990s, leading up to the year 2000, computing professionals warned that legacy computer systems, programmed to store years as two digits, were going to wreak havoc when the date rolled over from 99 to 00. Furthermore, the systems most affected by this problem were also the ones running complex and crucial infrastructure, including banks, power plants, and air-traffic control, which could incite massive calamity if they went down.
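To make the bug concrete, here is a toy sketch in Python (the function and values are hypothetical illustrations; the real affected systems were mostly written in older languages such as COBOL). Any arithmetic that subtracts two-digit years works fine within a single century and then breaks at the rollover:

```python
# Toy illustration of the Y2K bug: years stored as two digits,
# a shortcut many legacy systems took to save then-precious memory.

def years_elapsed(start_yy: int, end_yy: int) -> int:
    """Naive interval arithmetic on two-digit years."""
    return end_yy - start_yy

# An account opened in 1985 and checked in 1999: correct.
print(years_elapsed(85, 99))  # 14

# The same account checked in 2000: 00 minus 85 goes negative,
# so the account appears to predate its own opening.
print(years_elapsed(85, 0))   # -85
```

Remediation generally meant widening date fields to four digits or adding "windowing" logic that guesses the century from a two-digit year: exactly the kind of tedious, system-by-system work described next.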
Even so, nobody really knew what would happen if the bugs didn't get fixed, because testing massive, distributed infrastructure at real-world scale is extremely difficult. In the face of this uncertainty, public and private organizations chose not to ignore the issue but to do the expensive and onerous work of finding and hiring programmers who still knew the old languages that ran many of the legacy systems, just in case.
Was it worthwhile? We have no idea. Verifying the counterfactual has proved impossible: maybe all the time and money that went into retrofitting old COBOL code on mainframes really did save human civilization as the clocks struck midnight on January 1, 2000. Or maybe not. Unfortunately, the outcome (hey, whatever we did, it worked!) wasn't celebrated. Instead, the whole affair quickly became an embarrassment, seen by many as a stupid boondoggle that enriched duplicitous consultants.