As a shooter in Christchurch, New Zealand, set about massacring dozens of worshippers at two mosques on March 15, his body cam beamed live footage to social media. Soon after, Susan Wojcicki, the CEO of YouTube, learned that it was being uploaded to the platform. The company put thousands of human beings and a pile of algorithms to work finding and removing the snuff footage. It was already too late. As The Economist recounted not long ago, “Before she went to bed at 1am Ms Wojcicki was still able to find the video.” And no wonder: It was being uploaded as often as once every second, a dispersal “unprecedented both in scale and speed,” as a YouTube spokesperson told The Guardian. Facebook, also scrambling, removed the video from users’ pages 1.5 million times in the first 24 hours after the shooting. Yet nearly two months later, CNN reported still finding it on Facebook.
Not long before the attack, Justin Kosslyn, who was then an executive at Jigsaw, a technology incubator created by Google, had published an article on Vice.com called “The Internet Needs More Friction.” The internet, he argued, was built for instantaneous communication, but the absence of even brief delays in transmission had proved a boon to disinformation, malware, phishing, and other security threats. “It’s time to bring friction back,” he wrote. “Friction buys time, and time reduces systemic risk.”
Kosslyn was onto something—something whose implications extend well beyond the threats he discussed. Despite the Christchurch video, YouTube (reports The Economist) is not about to rethink the premise that “people around the world should have the right to upload and view content instantly.” But YouTube should rethink that premise, and so should the rest of us. Instanticity, if you will, is turning out to be a bug of online life and internet architecture, not a feature.
For a long time, through the internet’s first and second generations, people naturally assumed that faster must be better; slowness was a vestige of a bygone age, a technological hurdle to be overcome. What they missed is that human institutions and intermediaries often impose slowness on purpose. Slowness is a social technology in its own right, one that protects humans from themselves.
Take, for example, old-media publications such as The Atlantic, The New Yorker, and The New York Times. The digital operations of all three are speedy. But almost nothing goes online without first being vetted by at least one pair of editorial eyeballs. That costs money and slows down the content flow, of course, and for a time, many old-media types wondered whether our cumbersome, expensive bureaucracies were on their way to being obsolete. After all, social media promised to unleash millions of on-scene, real-time reporters, while allowing readers to curate their own news feeds and allowing experts to weigh in without being filtered by journalists. Who needed professional editors?
But old media’s premises turned out to be anything but obsolete. As a group, consumers are terrible editors. Many are poorly informed, inaccurate, biased, manipulable, sloppy, impulsive, or self-serving. And even though some are not, the bad can quickly drive away the good. I am not suggesting that social media should be edited in the style of a newspaper circa 1983. Even if old-style editing of, say, Facebook’s more than 1 billion daily posts were feasible, it would not be desirable; that degree of friction would defeat social media’s self-expressive purpose.
Still, the lessons of old media remain relevant. Social-media companies do, after all, practice a certain kind of editing. They have rules that promote some types of content and prohibit other types, and they maintain systems to delete or demote violations. Facebook deploys both artificial intelligence and thousands of human beings to identify and remove 18 types of content, such as material that glorifies violence or celebrates suffering. So editing is happening. It’s just happening after publication, instead of before, partly because instanticity allows no time for prior vetting—even by the user herself.
Imagine a simple change. A user creates a post or video on Facebook, Twitter, YouTube, or wherever. She presses the button to post it. And then … she waits. Only after an interval does her post go live. The interval might be 10 minutes, or it might be an hour, or it might be user-selected.
During that interval, something might happen. The user might receive a warning that a factual claim in her post had been disputed by leading fact-checkers. Facebook already provides such warnings, offering fact-checkers’ appraisals and asking users whether they wish to proceed anyway. Or, if she chose, her post might be routed to a handful of trusted friends, who might advise her that she was about to tweet herself out of a job. Or, toward the end of the interval, she might be required to view a screen displaying her post and asking, “Are you sure you’re ready to share this with the world? Remember, it will be out there forever.” Meanwhile, algorithms and humans could ensure that she isn’t posting a snuff video.
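The mechanics of such a delay are simple enough to sketch. The following is a minimal, hypothetical illustration — the names `Post` and `CoolingOffQueue` are invented for this example and correspond to no real platform's API — of a queue that holds each post for a configurable interval, during which the author can withdraw it:

```python
# Hypothetical sketch of a cooling-off queue for social posts.
# All names here are illustrative, not any actual platform API.
import time
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Post:
    author: str
    text: str
    submitted_at: float = field(default_factory=time.time)
    canceled: bool = False

class CoolingOffQueue:
    def __init__(self, delay_seconds: float = 600.0):
        # Default: a 10-minute interval, one of the options mentioned above.
        self.delay = delay_seconds
        self.pending: List[Post] = []

    def submit(self, post: Post) -> None:
        """Accept a post but hold it; nothing goes live immediately."""
        self.pending.append(post)

    def cancel(self, post: Post) -> None:
        """During the interval, the author can think better of it."""
        post.canceled = True

    def publish_ready(self, now: Optional[float] = None) -> List[Post]:
        """Release only posts whose interval has elapsed and that
        were not withdrawn; canceled posts are dropped entirely."""
        now = time.time() if now is None else now
        ready, still_pending = [], []
        for p in self.pending:
            if p.canceled:
                continue  # second thoughts won: the post never goes live
            if now - p.submitted_at >= self.delay:
                ready.append(p)
            else:
                still_pending.append(p)
        self.pending = still_pending
        return ready
```

The interval is also where the vetting described above would hook in: fact-check warnings, trusted-friend review, or an "Are you sure?" prompt could all run against `pending` before anything is released.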
The point is not the particulars. There is no single right way to introduce slowth. The point, rather, is this: Strategically introduced friction gives platforms and users time to vet content in whatever way they deem appropriate. It might reduce the velocity of something like the Christchurch video enough to give platforms’ monitors a fighting chance.
Even if nothing at all—no checking or vetting or reviewing—were done in the interval before a post or video went live, the waiting period itself would offer an important advantage. It would allow thought.
Humans have not one but two cognitive systems. In his book Thinking, Fast and Slow, the Nobel Prize–winning psychologist Daniel Kahneman calls them System 1 and System 2. System 1 is intuitive, automatic, and impulsive. It makes snap judgments about dangers such as predators or opportunities such as food, and it delivers them to our awareness without conscious thought. It is also often wrong. It is biased and emotional. It overreacts and underreacts. System 2, by contrast, is slower and involves wearying cognitive labor. It gathers facts, consults evidence, weighs arguments, and makes reasoned judgments. It protects us from the errors and impulsivity of System 1.
We need both systems, especially if we care about anger management. Arthur C. Brooks, a social scientist and the author of the recent book Love Your Enemies: How Decent People Can Save America From the Culture of Contempt, told me in an email that one of the most effective ways to tone down social hostility is to “put some cognitive space between stimulus and response when you are in a hot hedonic state—or, as everyone’s mom used to put it, ‘When you’re mad, count to ten before you answer.’ ” Slowing ourselves down gives time for System 2 to kick in.
Offline, our lives are hemmed in by institutions that force us to engage System 2, even when we are disinclined to. Children are taught to wait their turn before talking; grown-ups are frequently required to wait before marrying, divorcing, buying a gun. No matter how sure they may feel, scientists face peer review, lawyers face adversarial proceedings, and so forth. Also, back in the day, before instanticity, technology itself slowed us down. Printing and distributing words required several distinct stages and often multiple people; even a trip to the mailbox or a wait for the mail carrier afforded time for second thoughts. Abraham Lincoln, Harry Truman, and Winston Churchill were among the many public figures who wrote what Lincoln called “hot letters,” splenetic missives that vented anger but were never mailed. (Usually. One of Truman’s rants escaped and threatened a Washington Post writer with a black eye.)
On social media, no publisher or postal worker forces a pause. In 2013, a public-relations executive posted a tasteless and apparently racist joke on Twitter, intending (she later said) to satirize bigotry, not endorse it. Then she boarded an 11-hour flight. By the time she disembarked, she was world-famous, and not in a good way. She lost her job and became a pariah. What if Twitter had required her to pause for a while, and then asked her whether she was sure about her tweet? We’ll never know, but time to reflect might very well have improved her judgment.
Recently, an acquaintance of mine found himself compelled to apologize for tweeting profanely. I asked him: Would a cooling-off period have made a difference? He replied that he would still have shared his thoughts, but more temperately. And, he added, if Twitter offered a setting requiring users to take 10 before tweeting, he would turn it on.
Instanticity is hard to walk away from. Social-media companies are addicted to addictiveness, and some might resist any curtailment of profitable impulsivity. Some users might also reject a cooling-off interval, or abandon a platform that imposed one. Yet many other people are already trying to count to 10 before they tweet, and would welcome help. And many tech-industry leaders are looking for ways to dial back internet-enabled pathologies. Rethinking instanticity would help us put our better selves forward, perhaps often enough to make social media more sociable.
This article appears in the August 2019 print edition with the headline “Wait a Minute.”