Welcome to the Big Blur

Thanks to AI, every written word now comes with a question.

[Illustration: a pen flowing into an image of a person's head. Paul Spella / The Atlantic; Getty]

The question will be simple but perpetual: Person or machine? Every encounter with language, other than in the flesh, will now bring with it that small, consuming test. For some—teachers, professors, journalists—the question of humanity will be urgent and essential. Who made these words? For what purpose? For those who operate in the large bureaucratic apparatus of boilerplate—copywriters, lawyers, advertisers, political strategists—the question will be irrelevant except as a matter of efficiency. How will they use new artificial-intelligence technology to accelerate the production of language that was already mostly automatic? For everyone, the question will now hover, quotidian and cosmic, over words wherever you find them: Who’s there?

At its core, technology is a dream of expansion—a dream of reaching beyond the limits of the here and now, and of transcending the constraints of the physical environment: frontiers crossed, worlds conquered, networks spread. But the post-Turing-test world is not a leap into the great external unknown. It’s a sinking down into a great interior unknown. The sensation is not enlightenment, sudden clarification, but rather eeriness, a shiver on the skin. And as AI systems become more integrated into our lives, they will alter the foundations of society. They will change the way we work, the way we communicate, and the way we relate to one another. They will challenge our assumptions about what it means to be human, and will force us to confront difficult questions about the nature of consciousness, the limits of knowledge, and the role of technology in our lives.

The above was written half by myself and half by ChatGPT. Perhaps you could figure out which half is which if you parsed it closely or if you used an AI text detector. But how sure are you? Do you have the time or energy to figure it out? And in the end, how clear can you, or anyone else, be? We are entering a big blur, and its challenges are practical as much as philosophical.

Today, we witnessed the unveiling of GPT-4, the latest large language model from OpenAI. The new version is multimodal: You can input images or text, and it generates text as output. (Put in a picture of what’s on your kitchen counter, for example, and ask what you should cook for dinner.) But the primary advance is in highly sophisticated linguistic tasks. “The distinction between GPT-3.5 and GPT-4 can be subtle,” OpenAI acknowledged with the release of the product. “The difference comes out when the complexity of the task reaches a sufficient threshold.” The new version is particularly good at exams: It tested in the 90th percentile on the Uniform Bar Exam, and the 88th on the LSAT, although it still flunked AP English. The difference between GPT-4 and its predecessors is that it’s better, more human-seeming, at more things. The blur is getting blurrier.

Natural-language processing has lurched into the public consciousness in staggered steps. We met it through DALL-E 2, Stable Diffusion, and then ChatGPT. Stories about AI typically follow one of two themes: fear or greed. Each new arrival has been filtered through a series of hopes and anxieties—entirely appropriate to recently evolved hominids confronted with some new phenomenon on the savanna. Will this kill me? Can I eat it? With the arrival of text-to-image generation, the cry soon went up that these new technologies would exploit and replace the handiwork of human artists. But creative people are still the ones commanding the programs. There is now a new kind of artist: the prompt engineer. When the San Francisco Ballet released an AI-generated ad campaign, it also employed nearly 30 designers and other creatives.

The conventional fear—It’s coming for our jobs!—underrated the consequences of artificial intelligence, as if these developments were akin to the arrival of the mechanical awl, as if the stakes were a handful of creative-class jobs. No, the arrival of GPT-4 and the language programs preceding it forces us to confront much bigger questions: What is the value of originality? How does language construct meaning? And even: What is the nature of a person?

Sam Altman, the CEO of OpenAI, presaged the release of GPT-4 with a remark that reveals just how far removed the technologists are from any serious discussion of consciousness. In a tweet, he predicted that soon “the amount of intelligence in the universe [would double] every 18 months,” as if intelligence were something you mine, like cobalt. It seems necessary to repeat what is obvious from any single use of a large language model: The dream of an artificial consciousness is a nonstarter. No linguistic machine is any closer to artificial consciousness than a car is. The advancement of generative artificial intelligence is not an advancement toward artificial personhood for a simple, absolute reason: There is no falsifiable thesis of consciousness. You cannot find a researcher who can define, in a testable way, what consciousness is. Also, the limitations of the tech itself preclude the longed-for arrival of a manufactured soul. Natural-language processing is a statistical pattern-matching operation, a series of instructions, incapable of intention. It can only ever be the expressed intention of a person.

If an artificial person arrives, it will be not because engineers have liberated algorithms from being instructions, but because they have figured out that human beings are nothing more than a series of instructions. An artificial consciousness would be a demonstration that free will is illusory. In the meantime, the soul remains, like a medieval lump in the throat. Natural-language processing provides, like all the other technologies, the humbling at the end of empowerment, the condition of lonely apes with fancy tools.

That our antique fantasies and anxieties are useless wouldn’t matter so much if they weren’t so obscuring. OpenAI, the organization behind GPT-4, ChatGPT, and DALL-E 2, is concerned with the creation of an artificial general intelligence, or a machine that is smarter than a human. But to situate AGI in terms of people is not interesting. Instead, think of it as a problem-solving machine capable of flexibly moving between contexts.

A local example: A friend of mine has a son in French immersion. (I’m in Canada.) His son hates reading the school’s French children’s books. So my friend went to ChatGPT and had it write a children’s French book about his son’s favorite superhero, specifying the grade level and length. (OpenAI explicitly claims that one of the uses of GPT-4 will be sophisticated tutoring technologies. Khan Academy is one of its new partners.) ChatGPT followed the instructions. In algorithmic culture, if you want a book, you just ask a machine to make you one. The first blur is the line between the human and the mechanical in language. But from that blur will spread others, in this case the blur between creator and consumer. I literally cannot conceive of the consequences of this transition. What is a book if a reader automatically generates one at will?

There isn’t language to describe the mechanization of language. The word intelligence in artificial intelligence has been terribly misleading, and yet what other word would suit the case? ChatGPT is intelligent in the sense that it can create coherence. But by any other definition of intelligence, it isn’t. When Google announced its 540-billion-parameter language model, PaLM, last year, the company said, in some promotional materials, that PaLM is capable of “understanding.” Yes, PaLM can understand what you mean if you tell it to write a romantic poem or to translate a passage into Bengali. But as even some Google executives acknowledge, it doesn’t “understand” romantic poetry or Bengali as anything more than a series of patterns. It does not “understand” the way I understand romantic poetry or Bengali. It has “understanding” but not understanding.

The word understanding itself is now a blur.

Natural-language processing doesn’t analyze the meaning in words. It analyzes patterns in text-based tokens by way of a deep-learning technology called a transformer (the T in GPT). So a program like ChatGPT doesn’t process the first sentence of this paragraph in terms of subjects, verbs, and objects. It cycles through the connections between the hundreds of billions of words in its data set, which might one day comprise something like the entire internet. The essential blur is in the structure of the transformer: Its meaning comes through unfathomable processing.
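The pattern-matching described above can be sketched in miniature. What follows is a toy illustration of my own, not the transformer itself: a bigram model that predicts the next token purely from co-occurrence counts in its training text. It has no notion of subject or verb, only of which token has tended to follow which.

```python
from collections import Counter, defaultdict

# Toy statistical next-token prediction (a vastly simplified stand-in
# for what large language models do at scale): the model never parses
# grammar or meaning, it only counts which token follows which.
def train_bigrams(text):
    tokens = text.split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, token):
    # Return the most frequent continuation seen in training, if any.
    if token not in counts:
        return None
    return counts[token].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" most often
```

A transformer replaces these raw counts with hundreds of billions of learned weights and attends across long stretches of context, but the underlying operation remains statistical continuation, not comprehension.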

The underlying structure of the tech, more even than its effects, will shape the future. In algorithmic culture, history itself will become a lump of supercomputer fodder from which meaning is extracted. To the transformer, all previous art, all previous language, exists as intellectual pulp. There is no difference between Yeats’s Byzantium and your most recent email. Natural-language processing is an unfathomable disintegration followed by an unfathomable reintegration. All human expression is like an enormous junkyard in fog, where a mechanical claw strips everything down to the smallest bolts and reconfigures them in any approximation you can name.

A disintegrated history means a disintegrated future. History as a lump of tokens cannot be reconfigured by a sudden gust of revelation into fresh insight or a new vision. All you will be able to do is make more past. All you will be able to write is more tokens. In algorithmic culture, the archives will be the source of power. They will also be prisons. Use ChatGPT for a bit and you’ll see the deal it invisibly offers: The machine allows you to write whatever you like, instantly, freely, with no effort, just so long as it’s like everything that has come before. GPT-4 is stronger than its predecessors, but it doesn’t change the fundamental arrangement.

The old fantasies about the future were strikingly poor. Space travel turned out to be a minor subset of the travel industry for the ultrarich. The metaverse is boring; not even its designers want to hang out there. Instead of the imagined utopias or dystopias rendered out of fear and greed that have consumed the imaginations of the recent past, technology is leading to a big blur. Instead of radical clarity, a deep and abiding confusion.

Confusion is natural. In one passage from The Gutenberg Galaxy, Marshall McLuhan described earlier periods of confusion at moments of technological change to language:

An age in rapid transition is one which exists on the frontier between two cultures and between conflicting technologies. Every moment of its consciousness is an act of translation of each of these cultures into the other. Today we live on the frontier between five centuries of mechanism and the new electronics, between the homogeneous and the simultaneous. It is painful but fruitful. The sixteenth century Renaissance was an age on the frontier between two thousand years of alphabetic and manuscript culture, on the one hand, and the new mechanism of repeatability and quantification, on the other.   

McLuhan’s concept of the interface, published in 1962, is much more useful than disruption as a way of understanding the birth of natural-language processing. For McLuhan, the Renaissance was not a moment in time, or a period, or a revolution in thinking. Rather, it was an exchange between different epochs, and that exchange was subtle and profound. For example, the regulation of print—the precision and replicability that distinguished typeset texts from scribal manuscripts—established an aesthetic framework for the approach to knowledge that gave rise to the scientific method. Some of the subtle and profound consequences of the translation between technologies took centuries to reveal themselves. McLuhan points out that the idea of a personal voice in a continuous narrative—what we have come to think of as the defining feature of printed texts—did not arrive until long after the printing press.

Even in these early days, when the sheer power of these new linguistic tools still mesmerizes, the necessary counter-gesture is already surfacing. Artificial intelligence creates an object that is a subject, voices that aren’t voices, faces that aren’t faces. Algorithmic culture lives in between, in a world where the human is the flickering continuation of past patterns coughed up and then spat out ephemerally.

But the human isn’t going anywhere. Recently I attended a bar mitzvah. It’s a brilliant ceremony. You don’t just read from the Torah. You give a speech. To be an adult, in society, is to have something to say, a perspective that the community can take seriously. Why should you write your paper yourself? Because you’re a person. A person wants to be heard.

Every culture works by reaction and counterreaction. For several hundred years, the education system has focused on teaching children to write like machines, to learn codes of grammar and syntax, to make the correct gestures in the correct places, to remember the systems and to apply them. Now there’s ChatGPT for that. The children who will triumph will be the ones who can write not like machines, but like human beings. That’s an enormously more difficult skill to impart or master than sentence structure. The writing that matters will stride straight down the center of the road to say, Here I am. I am here now. It’s me.