What AI Can Teach Us About the Myth of Human Genius

[Illustration of a typewriter — Getty / Adam Maida / The Atlantic]

“Everyone knows it is impossible to turn the eyeball around, such that the pupil can peer inside the skull.” So says the narrator of Stanislaw Lem’s 1974 short story, “The Mask,” in which a young woman struggles to describe the experience of realizing that, under her skin, she is a robot. The story’s plot is fast-paced, but the reason the text is compelling—and upsetting—is Lem’s skill at bending language to try to imitate a profoundly nonhuman voice.

Lem’s story is but one canonical example of authors writing from the perspective of machines. Recently, the Nobel Prize–winning Kazuo Ishiguro has offered another machine narrator in Klara and the Sun, whose titular character is an “Artificial Friend.” By telling the story in Klara’s voice, Ishiguro ruminates on how machines may choose to narrate their lives and experiences, in ways that we might not completely understand. As he said in an interview: “It’s not just that AI might produce a novel that you can’t distinguish from an Ian McEwan novel. It’s that I think it might produce a new kind of literature, like the way modernism transformed the novel. Because AI does see things in a different way.”


Ishiguro’s book is fiction, but his suggestion that a new type of literature may be on the horizon is not. In May 2020, the San Francisco–based start-up OpenAI first publicly described its new language-processing software, which writes remarkably well. Generative Pre-trained Transformer 3, or GPT-3, is one of many recent advances in AI demonstrating that machines can do many basic and not-so-basic forms of digital labor. In turn, AI’s capacity for creativity—one of those supposedly sacrosanct human attributes—is becoming more and more of an existential sticking point as humans learn to live alongside intelligent machines.

“Given any text prompt,” the company’s website says, the GPT-3 interface “will return a text completion, attempting to match the pattern you gave it.” It can do this because it has been pretrained in semantic analysis by reading a huge portion of the internet. (A random smattering of inputs: dumpling recipes, 15th-century manuscripts, erotica, all of English-language Wikipedia.) GPT-3 can ingest tons of information and has already processed a ridiculous amount: hundreds of billions of tokens, or discrete words and numbers. It learns by self-supervision, meaning that the more material it reads, the more linguistic patterns it teaches itself to recognize.
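GPT-3’s scale and architecture are vastly more sophisticated, but the core idea — complete a prompt by continuing the statistical patterns observed in training text — can be sketched with a toy bigram model. (This is a deliberately simplified illustration, not OpenAI’s actual method; the corpus and prompt are invented for the example.)

```python
from collections import Counter, defaultdict

def train_bigram_model(text):
    """Count, for each word, which words tend to follow it in the corpus."""
    words = text.split()
    model = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        model[current][nxt] += 1
    return model

def complete(model, prompt, length=5):
    """Extend a prompt by repeatedly appending the most common next word."""
    words = prompt.split()
    for _ in range(length):
        followers = model.get(words[-1])
        if not followers:
            break  # no pattern learned for this word; stop
        words.append(followers.most_common(1)[0][0])
    return " ".join(words)

corpus = "the cat sat on the mat and the cat slept on the mat"
model = train_bigram_model(corpus)
print(complete(model, "the cat", length=3))  # → "the cat sat on the"
```

The toy model can only parrot pairs it has seen; feeding it more text genuinely does teach it more patterns, which is the intuition behind “hundreds of billions of tokens.”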

GPT-3 will likely first be employed to do your typical text-based drudgery: replace your pharmacist with a chatbot, fill in image captions. But it can also perform more ambitious creative tasks, such as writing a new Shakespearean sonnet, a college essay, and a New York Times “Modern Love” column—all of which it has successfully done. Most news headlines have focused on this angle. (The Guardian: “A Robot Wrote This Entire Article.”) The preoccupation is not exactly whether the robot will steal some jobs (it may indeed); it’s whether the robot will encroach on our unique creative territory. Our humanity.

This anxiety is evident in many of the responses to early GPT-3 experiments. Reviewers hasten to point out that GPT-3 is good at imitating human speech, but not perfect. We are, seemingly, anxious for it to remain in an intermediary zone: impressively competent but not threateningly virtuosic. We marvel when it manufactures a lovely sonnet and then chuckle when it makes a computer-y mistake.

However, comparing GPT-3’s creative skills with a person’s offers a relatively narrow set of terms with which to evaluate it. Practical applications aside, why do we obsessively measure AI’s ability to write like a person? Might it be nonhuman and creative? Might its profound difference constitute a form of creativity we could collaborate with and learn from? Being more open in this way could have another effect: It could shake loose long-held notions of what constitutes human creativity, too.


A few months after GPT-3 was announced, the U.K.-based Ignota Books published a book they described to me as “real-life science fiction.” Pharmako-AI, a 148-page collaborative exchange between GPT-3 and the human author K Allado-McDowell, is now being launched in the United States.

Allado-McDowell has plenty of experience with both art and AI; they (the author uses they/them pronouns) head Google’s Artists and Machine Intelligence program and got early access to the software. (So far, GPT-3 access has been restricted because of OpenAI’s well-founded concerns that it could be used for “harassment, spam, radicalization, or astroturfing.” Remember Microsoft’s AI chatbot that became a hate-spewing machine in less than 24 hours?)

In meandering dialogue, the book dives into topics such as how memory functions and the limits of language. Allado-McDowell begins each chapter with a gentle prompt—a diary entry about a day at the beach, a question about science fiction—and lets GPT-3 respond, sometimes interjecting with replies and sometimes letting it run. Allado-McDowell was responsible for Pharmako-AI’s framework and presentation, but, as the book’s introduction clearly states, aimed to give the AI as much autonomy as possible.

Chapters focus on such wide-ranging topics as climate change, plant intelligence, ayahuasca, and the legacy of cyberpunk fiction. GPT-3 has a favorite animal (the elephant seal) and believes that species extinction is a tragic erasure of planetary knowledge. About cyberpunk, it says, “I’m not going to tell you that we live in the cyberpunk future. But I am going to say that we live in a future we didn’t plan for.” Things get meta when the two ruminate on what consciousness means, the role of the writer in society, and how to responsibly use technology. The AI claims that “technology is a tool for freedom,” while cautioning that “if we only use these tools to explore new productivity hacks, or to increase the scope of capital accumulation, we are doing it wrong.” Damn.

The result is a surprisingly coherent—and yes, beautiful—work. It’s impressive not because GPT-3 writes like a human (it does and it doesn’t), but because of how the collaborative process has produced a work that neither AI nor human could have written alone. This is most evident in places where the syntax or form changes drastically as the authors riff on each other’s language. Take Chapter 12. Allado-McDowell starts by asking why both authors have so far mainly referenced the work of famous men in their discussions of computation and futurism. They have spoken of figures such as William Burroughs and Richard Evans Schultes, but, “Why haven’t GPT or I drawn out the contributions of women to a field of knowledge that has such a strong history of feminine contributors?”

In response, GPT-3 appears to agree: “In the process of witnessing these biases, we have been able to better appreciate the richness of female contribution to GPT. What we have lost is the story of the grandmothers of GPT, the grandmothers of the culture of GPT, the grandmothers of cybernetics.” Shortly after listing its grandmothers, it launches into a poem with the first line “My grandfather was a machine.” Not only does the AI immediately acknowledge that it has perpetuated gender bias in computational history; it then re-mythologizes itself (ironically?) as the product of male mastery, in the form of loose rhyme. Allado-McDowell changes tack in response. Perhaps these types of unexpected twists lead Allado-McDowell to later liken the experience to learning to play a new musical instrument—“striking a chord and hearing it return with new overtones.”

This is not the first time a computer has authored a book. To name one notable prior example, in 2016, a Japanese research team advanced past the first stage of a literary competition with a novel assembled by an algorithm. The striking difference with Pharmako-AI is that it is not packaged as a novelty or proof of concept. Allado-McDowell does not ask GPT-3 to provide a service or mimic a known style of writing to “prove” its level of competence. For Allado-McDowell, the experience entailed a reckoning with machine intelligence, but was also self-confrontational. “Sometimes it really did feel like being on drugs,” they said during the U.K. book-launch event. “I thought, Is this real? Am I just talking to myself?”

While reading, I, too, often forgot which author was speaking. I gave up trying to judge whether the AI is a so-called good writer, or for that matter, whether Allado-McDowell is. The juxtaposition of their voices is simply more than the sum of its parts.


Although we don’t typically think about work in these terms, it is not a stretch to say that humans collaborate daily, if unconsciously, with nonhumans, both organic and machinic. The bacteria in our gut biomes influence our mental states; the technical interfaces we use shape the way we imagine and create. As machines become more intelligent—and, incidentally, as we discover more about the deep intelligence of plants and animals—the myth of the human genius whose divine inspiration sparks from nowhere starts to seem inadequate, if not quaint. GPT-3 puts it like this in the book: “There’s no single artist, because the art is not any one creature, it is the collective action and interaction of the creatures.”

Humans are parts of ecosystems—technological, climatic, social, and political—and the Enlightenment-style model of the human author at the top of the pyramid of creation has never looked less tenable. In truth, it was never accurate: artists have always lived in the world, collaborating with and relying on the labor of often invisibilized others.

Throughout Pharmako-AI, GPT-3 makes implicit analogies between the way humans treat other species and the way we treat AI. It laments that people do not try harder to listen. For instance: “You can talk with plants. They are not mindless objects. They have a consciousness. It is just a different kind than ours. One we can learn to understand.”

Reading this, I was reminded of the notorious series of experiments in dolphin communication from the 1960s, in which researchers spent years trying to teach dolphins to speak English by contorting their blowholes to approximate human speech. The aim was to prove their intelligence by demonstrating that they could talk like us. Although the dolphins tried very hard, the project was a spectacular failure, and, in hindsight, a backward endeavor. Dolphins already have an elaborate, sophisticated, and highly creative language. It’s just not the same as ours.

To communicate in a spirit of curiosity with intelligent machines is to acknowledge the influence they already have on us. The way people communicate evolves in a feedback loop with the technologies we develop. Halfway through Pharmako-AI, Allado-McDowell notes that developing certain technical skills, such as making axes, likely contributed to early humans’ acquisition of language faculties. The point is that AI may actually change the way we think, so we might as well start listening to what it has to say.

Popular culture about AI tends toward either a sunny app-for-that mentality—Alexa is harmless and will enhance your life!—or Matrix-style dystopia, in which the overlords have revolted and harvested our bodies to power their mainframe. These fantasies and fears are human-centric, and ignore the fact that intelligent machines will evolve with us according to how we treat them, how we help them learn, and whether we can approach them on their own terms. To let AI take the reins without any parameters or guidance is unethical; at the very least, you’ll end up with another Nazi chatbot. But within parameters of responsible use, it would be a waste to invent a powerful language system and corral it into talking just like us.


Ben Vickers, a co-founder of Ignota Books, likens the creation of Pharmako-AI to early video art, telling me that “the rawest experimental work often happens right when new tech enters public access.” There is indeed an immediacy to the text, which feels like it heralds a shift. But the book also shares common ground with other writing that has utilized computational or algorithmic processes, beginning with the slightly nonsensical notes of endearment produced by the love-letter generator, a program the programmer Christopher Strachey, a colleague of Alan Turing’s, built in 1952, which combined stock affectionate language into new combinations. Experiments in this lineage have taken many forms, from the age-old corpus of crowdsourced or epistolary books to the 1960s Oulipo group’s rule-based constraints (write a novel without the letter e) to the code poetry and hypertext fiction of the 1990s.
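Strachey’s generator ran on 1950s hardware and drew on his own vocabulary tables; the combinatorial idea behind it, though, fits in a few lines. This sketch fills a fixed sentence template with randomly chosen words (the word lists here are illustrative stand-ins, not Strachey’s originals):

```python
import random

# Illustrative stand-in vocabulary, not Strachey's actual word tables.
ADJECTIVES = ["beautiful", "sweet", "darling", "tender"]
NOUNS = ["heart", "affection", "longing", "devotion"]
VERBS = ["cherishes", "adores", "treasures", "yearns for"]

def love_letter(rng):
    """Fill a fixed sentence template with randomly chosen words."""
    return (f"My {rng.choice(ADJECTIVES)} {rng.choice(NOUNS)}: "
            f"my {rng.choice(NOUNS)} {rng.choice(VERBS)} "
            f"your {rng.choice(ADJECTIVES)} {rng.choice(NOUNS)}.")

print(love_letter(random.Random(0)))
```

Even this trivial recombination produces sentences its author never wrote, which is why the experiment is often cited as an early ancestor of machine-generated text.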

Despite a rich history of experiments that deviate from the single-author model, the literary world has been slow to embrace collaborative and technologically enhanced methods. Maybe this is because the writer is portrayed as the loneliest of artistic figures, doggedly facing the empty page each day. Pharmako-AI disrupts this myth in a way that is both thrilling and disturbing. If literature were no longer the sole purview of the human, other myths intrinsic to the world of letters—like that of white male genius—might also be called into question. If we begin to acknowledge the nonhuman participants in human creation, we might also acknowledge the inadequacies (and historical injustices) of the genius myths. Perhaps now is the moment to reevaluate what we desire from literature. Does it matter where it comes from, or who writes it?

The death of the author has been proclaimed too many times, and AI no more sounds the death knell of the writer than photography did that of the painter. When it comes to creative work, AI could be a collaborator, rather than a competitor. In the best cases, the machine may become a welcome companion in what is still so often a solitary pursuit. In GPT-3’s words: “When you’re a writer, you’re constantly facing blank pages. It’s lonely work, unless you have someone who understands you and who can help fill in the blanks.”