Does the Internet Breed Creativity or Destroy It?

Editor’s Note: This article previously appeared in a different format as part of The Atlantic’s Notes section, retired in 2021.

What the internet does to the mind is something of an eternal question. Here at The Atlantic, in fact, we pondered that question before the internet even existed. Back in 1945, in his prophetic essay “As We May Think,” Vannevar Bush outlined how technology that mimics human logic and memory could transform “the ways in which man produces, stores, and consults the record of the race”:

Presumably man’s spirit should be elevated if he can better review his shady past and analyze more completely and objectively his present problems. He has built a civilization so complex that he needs to mechanize his records more fully if he is to push his experiment to its logical conclusion and not merely become bogged down part way there by overtaxing his limited memory. His excursions may be more enjoyable if he can reacquire the privilege of forgetting the manifold things he does not need to have immediately at hand, with some assurance that he can find them again if they prove important.

Bush didn’t think machines could ever replace human creativity, but he did hope they could make the process of having ideas more efficient. “Whenever logical processes of thought are employed,” he wrote, “there is opportunity for the machine.”

Fast-forward six decades, and search engines had claimed that opportunity, acting as a stand-in for memory and even for association. In his October 2006 piece “Artificial Intelligentsia,” James Fallows confronted the new reality:

If omnipresent retrieval of spot data means there’s less we have to remember, and if categorization systems do some of the first-stage thinking for us, what will happen to our brains?

I’ve chosen to draw an optimistic conclusion, from the analogy of eyeglasses. Before corrective lenses were invented, some 700 years ago, bad eyesight was a profound handicap. In effect it meant being disconnected from the wider world, since it was hard to take in knowledge. With eyeglasses, this aspect of human fitness no longer mattered in most of what people did. More people could compete, contribute, and be fulfilled. …

It could be the same with these new computerized aids to cognition. … Increasingly we all will be able to look up anything, at any time—and, with categorization, get a head start in thinking about connections.

But in Nicholas Carr’s July 2008 piece “Is Google Making Us Stupid?,” he was troubled by search engines’ treatment of information as “a utilitarian resource to be mined and processed with industrial efficiency.” And he questioned the idea that artificial intelligence would make people’s lives better:

It suggests a belief that intelligence is the output of a mechanical process, a series of discrete steps that can be isolated, measured, and optimized. In Google’s world, the world we enter when we go online, there’s little place for the fuzziness of contemplation. Ambiguity is not an opening for insight but a bug to be fixed. The human brain is just an outdated computer that needs a faster processor and a bigger hard drive.

Even as Carr appreciated the ease of online research, he felt the web was “chipping away [his] capacity for concentration and contemplation.” It was as if the rote tasks of research and recall, far from wasting innovators’ time, were actually the building blocks of more creative, complex thought.

On the other hand, “you should be skeptical of my skepticism,” as Carr put it. And from the beginning, one great benefit of the internet was that it brought people in contact not just with information, but with other people’s ideas. In April 2016, Adrienne LaFrance reflected on “How Early Computer Games Influenced Internet Culture”:

In the late 1970s and early 1980s, game makers—like anyone who found themselves tinkering with computers at the time—were inclined to share what they learned, and to build on one another’s designs. … That same culture, and the premium it placed on openness, would eventually carry over to the early web: a platform that anyone could build on, that no one person or company could own. That idea is at the heart of what proponents for net neutrality are trying to protect—that is, the belief that openness is a central value, perhaps even the foundational value, of what is arguably the most important technology of our time.

But as tech culture evolved and pervaded life outside the web, even its problem-solving methods began to seem reductive at times. Ian Bogost outlined that paradox in November 2016 when a new product called ketchup leather was billed as the “solution” to soggy burgers:

The technology critic Evgeny Morozov calls this sort of thinking “solutionism”—the belief that all problems can be solved by a single and simple technological solution. … Morozov is concerned about solutionism because it recasts social conditions that demand deeper philosophical and political consideration as simple hurdles for technology. …

But solutionism has another, subtler downside: It trains us to see everything as a problem in the first place. Not just urban transit or productivity, but even hamburgers. Even ketchup!

So, what’s your personal experience of how the internet affects creativity? Can you point to a digital distraction—Netflix, say, or Flappy Bird—that’s enriched your thinking in other areas of your life? On the flip side of the debate, can you point to a tool like email or Slack that’s sharpened your efficiency but narrowed the scope of your ideas? We’d like to hear your stories; please send us a note.