When the Human and the Artificial Collide
Our fears about technology reflect what we value about personhood.
Your weekly guide to the best in books

Humanism is a tradition that prizes, above all, the irreplaceable experience of being a person. In her new book, Humanly Possible, Sarah Bakewell aims to revitalize the philosophy’s emphasis on morality, reason, and optimism. But Bakewell’s book has landed at a curious time, Franklin Foer writes: Her way of thinking is imperiled by a changing culture—especially recent developments in artificial intelligence. As these advancements threaten to upend the primacy of “the faculties of the independent mind, the very core of intellectual personhood,” Foer writes, humanism could use a champion. (Unfortunately, Bakewell’s defense fails to meet the moment.)
Eric Schmidt, who co-wrote a book on AI with Henry Kissinger, told my colleague Charlie Warzel last month that “the reason we’re marching toward this technological revolution is it is a material improvement in human intelligence.” But, as Warzel points out, we don’t actually know how generative AI will change human lives. Matteo Wong considered whether it might become an accelerant for conspiracism or a new way to spread the kinds of disinformation humans have always been susceptible to.
Of course, our unaugmented capabilities are what make large language models such as the one behind ChatGPT and the newer GPT-4 possible. “Every chatbot is created by ingesting books and content that have been published on the internet by a huge number of people,” Wendy Liu writes. “So in a sense, these tools were built by all of us.” As a result, the move to monetize these products is an attempt to privatize what she argues should be collectively owned: “the informational heritage of humanity.”
Our concerns about technological discoveries have always reflected what we actually value about humanity. In the mid-20th century, Isaac Asimov wrote some of American culture’s most influential tales about artificial intelligence; in story after story, his robotic characters long to be real men. Asimov “was, deep down, a humanist,” Jeremy Dauber notes, and the subjects in his stories crave what he cherished: imagination, connection, love. Likewise, Dauber points out, “AI networks … are our creatures as surely as Asimov’s paper-and-ink creations were his own.” They are “machines built to create associations by scraping and scrounging and vacuuming up everything we’ve posted, which betray our interests and desires and concerns and fears.”
Every Friday in the Books Briefing, we thread together Atlantic stories on books that share similar ideas. Know other book lovers who might like this guide? Forward them this email.
When you buy a book using a link in this newsletter, we receive a commission. Thank you for supporting The Atlantic.
What We’re Reading

“Between the time that Sarah Bakewell shipped her final draft of Humanly Possible and received finished copies of the book, her subject began to stare squarely at its demise. What her book set out to defend is an intellectual tradition, admittedly ill-defined, that stands for reason, the ennobling potential of education, and the centrality of the ‘human dimension of life,’ as opposed to systems and abstract theories. But in the intervening months, advanced chatbots descended; so did the possibility that they might soon imperil the whole of that enterprise.”

What have humans just unleashed?
“But, according to experts, to actually parse why a program generated a specific result is a bit like trying to understand the intricacies of human cognition: Where does a given thought in your head come from?”

Conspiracy theories have a new best friend
“To argue that new technologies, whether social media or AI, are primarily or solely responsible for bending the truth risks reifying the power of Big Tech’s advertisements, algorithms, and feeds to determine our thoughts and feelings. … The messier story might contend with how humans, and maybe machines, are not always very rational; with what might need to be done for writing history to no longer be a war.”

AI is exposing who really has power in Silicon Valley
“To recognize that these problems are larger than any one company isn’t to let OpenAI off the hook; rather it’s a sign that the industry and the economy as a whole are built on unequal distribution of rewards. The immense profits in the tech industry have always been funneled toward the top, instead of reflecting the full breadth of who does the work. But the recent developments in AI are particularly concerning given the potential applications for automating work in a way that would concentrate power in the hands of still fewer people.”

What Isaac Asimov can teach us about AI
“The humanity of Asimov’s robots—a streak that emerges again and again in spite of the laws that shackle them—might just be the key to understanding them. What AI picks up, in the end, is a desire for us, our pains and pleasures; it wants to be like us. There’s something hopeful about that, in a way. Was Asimov right? One thing is for certain: As more and more of the world he envisioned becomes reality, we’re all going to find out.”
About us: This week’s newsletter is written by Emma Sarappo. The book she’s reading next is When the Angels Left the Old Country, by Sacha Lamb.
Comments, questions, typos? Reply to this email to reach the Books Briefing team.
Did you get this newsletter from a friend? Sign yourself up.