Is This The Week AI Changed Everything?

Tech companies certainly say so. The reality is much more dizzying.

Illustration: A parade of search bars that read "Ask me anything" stretches into the distance. (Matt Chase / The Atlantic)

Welcome to the week of AI one-upmanship. On Tuesday, in a surprise announcement, Microsoft unveiled its plans to bring the technology behind OpenAI’s ChatGPT bot to its search engine, Bing. (Remember Bing?) According to the company, the new tool will be a paradigm shift in the way that humans search the internet. As one early tester demonstrated, the query Find me tickets to a Beyoncé concert in the United States where I won’t need a jacket at night prompts the AI to estimate what constitutes jacket weather, gather tour dates, and then cross-reference those dates with the average temperature in each location at the time of the show, all to provide a few-sentence answer. In one example from Microsoft’s presentation, Bing helped a user come up with a travel itinerary and then write messages proposing the trip to family members. Clippy, it appears, has touched the face of God.

On its own, all of that would be a lot to take in. But then, one day after Microsoft’s event, Google gave its own presentation for Bard, another generative-AI-powered chatbot search feature. Unlike Microsoft, which is allowing anyone to join a waitlist for the new Bing, Google is releasing the tool only to a group of “trusted testers” to start. But if you believe the press releases and CEO bluster, navigating the internet and accessing information will look completely different in a matter of months.

All of this news is frankly overwhelming. Microsoft’s and Google’s announcements follow last summer’s public debuts of AI art tools including DALL-E 2, Midjourney, and Stable Diffusion, which demonstrated an uncanny ability to create vivid, original images from a simple string of text. And in late November, OpenAI released ChatGPT, which has upended many conceptions of how machines can interact with humans, passing graduate-school exams, flooding the internet with confident bullshit, writing news articles, and helping people get jobs and cheat on tests. It’s hard not to get the sense that we are just at the beginning of an exciting and incredibly fast-moving technological era. So fast-moving, in fact, that parsing what we should be delighted about, and what we should find absolutely terrifying, feels hopeless. AI has always been a mix of both, but the recent developments have been so dizzying that we are in a whole new era of AI vertigo.

Across the internet, technologists and venture capitalists, sensing fortunes to be made, are suggesting that the world is about to be completely reimagined and that the stuff of science fiction is at arm’s reach.

At present, the new search tools look like a streamlining of the way we search. Those who’ve had early access to the new, AI-powered Bing have described it as a genuine shift, saying that using it feels akin to the first time they searched for something on Google. A product rollout that produces this kind of chatter doesn’t happen often. Sometimes, it signals a generational shift, like the unveiling of Windows 95 or the first iPhone. What these announcements have in common is that they don’t just reimagine a piece of technology (desktop operating systems, phones) but rather create their own gravity, reshaping culture and behaviors around their use.

AI enthusiasts will tell you that the sheer scale of these new developments is world-changing. Consider the pace of adoption for products such as ChatGPT, which attracted tens of millions of users in its first two months. Then consider the new scale of AI’s abilities. According to researchers, AI’s computational power is doubling every six to ten months, well ahead of Moore’s Law. The implication is that, however impressive these tools may feel at present, we’ve barely glimpsed what they will be capable of in just weeks’ time. The current hype around OpenAI’s GPT-4 is that it will behave in unrecognizable ways compared with its predecessor, which powers ChatGPT.

That said, everything you’ve read thus far might only be hype. Those who are most vocal about the AI paradigm shift, after all, tend to have a vested interest in the technology’s success. Even the tech industry’s sudden rhetorical pivot from Web3 to AI as the internet’s next savior should raise suspicions about exactly how real all of this is. And from what we can see of the new Microsoft and Google products—which are largely unavailable to the general public as of this writing—they are imperfect. ChatGPT’s current model is already infamous for confidently stating false information. Yesterday, Reuters reported that one of Bard’s demo answers, which concerned space telescopes, included a factual inaccuracy.

But even if the information these tools surface isn’t false, that doesn’t mean the tools won’t cause new problems. If these chatbots usher in a genuine search revolution, how will the billions of dollars wrapped up in search advertising be reallocated? It’s hard to imagine that the clean design of these new tools won’t later be overrun by ads or that companies won’t broker their own deals to get priority placement, just as they have across traditional Google Search. And, if the engines offer up full summaries and answers without requiring users to click links, what happens to the vital influx of traffic that search directs toward websites and publishers?

A paradigm shift in how we navigate the internet would likely upend the countless microeconomies that depend on search, which raises the question: Have the AI’s creators—or anyone, for that matter—planned for this kind of disruption? Despite its relatively subdued entry into the AI arms race, Google has been developing its Language Model for Dialogue Applications (LaMDA) technology for years—perhaps it hasn’t fully integrated that technology into search because doing so threatens to upend its still-lucrative business.

Already, Google is facing financial repercussions for its Bard presentation: The report of Bard’s factual error caused the company’s stock to slide as much as 9 percent. It also led to arguments over whether Bard was actually wrong. The Financial Times wrote that the answer had merely been misinterpreted, whereas an astrophysicist insisted that it was a clear factual error. This confusion is a glimpse into our immediate AI future, one in which humans disagree about whether the machines are telling the truth, while fortunes are gained and lost in the process.

Accuracy isn’t the only thing we’ll be fighting about. If you thought the content-moderation battles of the 2010s and the endless Is X a platform or a publisher? debates were exhausting, whatever is next will be more intense. Fights over censorship on platforms such as Facebook and Twitter and on search engines such as Google will pale in comparison to the coming arguments over how large language models are trained and who is doing the training. For all their faults, our current platforms still surface information for the consumer to peruse, whereas the AI-powered-chatbot model strives to present fully formed answers with limited footnotes—a kind of post-post-truth search engine. The notion that deep neural networks trained on opaque data sets will soon act as the arbiters of information for millions is sure to raise hackles on both sides of the political aisle. (Indeed, a rudimentary version of that culture war is already brewing over ChatGPT.)

For me, all of this uncertain potential for either progress or disaster manifests as a feeling of stuckness. On the one hand, I’m fascinated by what these tools promise to evolve into and, though it’s early, by what they currently claim to do. There’s an excitement bubbling around this technology that feels genuine, especially compared with crypto and Web3 evangelism, which claimed to be fueling a paradigm shift but offered very few compelling use cases.

On the other hand, the fascination is tempered by the speed with which the field is moving and the potential stakes of this change. There’s a discontinuity in the tenor of the AI discourse: True believers suggest that nothing will be the same and that society might not be emotionally, culturally, or even politically ready for what’s next. But these same people are putting their foot on the gas, our readiness be damned. As Microsoft CEO Satya Nadella told the crowd on Tuesday, “The race starts today, and we’re going to move, and move fast.”

AI vertigo comes from trying to balance thorny questions with the excitement inspired by a technology that offers to understand us and cater to our whims in unexpected, perhaps unprecedented ways. The idea of generative AI as a new frontier for accessing knowledge, streamlining busywork, and assisting the creative process might exhilarate you. It should also unnerve you. If you’re cynical about technology (and you have every reason to be), it will probably terrify you.

For now, the speed of the change and its destabilizing effects are the most concerning elements of this new era. The possibility of search reorienting itself to privilege computer-generated answers—at a time when users seem more eager than ever to get their answers from real people on sites such as Reddit—is nausea-inducing. As the tech critic Michael Sacasas wrote recently, “I’m stuck on the incongruity of populating the world with non-human agents and interfaces that will mediate human experience in an age of mounting loneliness and isolation.”

Feeling AI vertigo doesn’t necessarily mean objecting to the change or the technology, but it does mean acknowledging that the speed feels reckless. Like all transformative technology, AI is evolving without your input. The future is being presented to you whether you consent or not.