Radio Atlantic: What AI Means for Search

How transformative are the new AI search tools? Are they a new Skynet or just a new Clippy?

The Atlantic's staff writers Amanda Mull and Charlie Warzel

With Google and Microsoft releasing new AI tools, it feels like the future is now with artificial intelligence. But how transformative are products like ChatGPT? Should we be worried about their impact? Are they a new Skynet or just a new Clippy?

Staff writers Charlie Warzel and Amanda Mull discuss.

The following is a transcript of the episode:

Charlie Warzel: It feels like we’re moving toward this search paradigm where search is going to get a little more fun, a little more entertaining, and, possibly, not any more accurate.

And instead, those decisions about who gets to decide what are being entrusted to huge, complicated machine-learning models, and we have no idea how they’re trained or what they’re gonna do. It kind of sounds like a nightmare.

Amanda Mull: Welcome to Radio Atlantic. I’m staff writer Amanda Mull and I’m here with fellow staff writer Charlie Warzel, who writes about technology for us. Lately, Charlie’s been considering the nascent AI arms race between Google and Microsoft and why, as he put it in a recent piece, Clippy has touched the face of God. Charlie, it’s great to have you.

Warzel: Happy to be here. Happy to talk about Clippy.

Mull: Great. Great. So, Charlie, lately you’ve been writing about and reporting on AI now that Bing and Google have mutually decided to make all of us think about AI. But first I want to talk about finding a bookcase.

Warzel: Let’s do it. Let’s let Bing be our search butler here.

Mull: What I would like to find out is if the Ikea Kallax bookcase—a classic—can fit in the back of a 2002 Volkswagen Jetta.

Warzel: Okay. Alright, it’s searching, it’s still kind of creaking and groaning under the weight of your search. Okay.

Hello. This is Bing. The Ikea Kallax bookshelf has a height of 30 inches, or 77 centimeters, a width of 58 inches, or 147 centimeters, and a shelf depth of 15 inches, or 39 centimeters. The 2002 Volkswagen Jetta has a trunk space of 47 inches wide, 42 inches deep, and 22 inches tall. Based on these dimensions, it seems unlikely that the bookshelf can fit inside the trunk of the car. You may need to disassemble the bookshelf or use another vehicle to transport it.

Mull: It seems like we need to ask Bing to use the packaging dimensions and not the assembled dimensions.

Warzel: Ooh. Okay. It’s searching …

Hello, this is Bing, I apologize for my mistake. The Kallax packaging dimensions are not clearly stated on the IKEA website. However, based on some online reviews and forums, it seems the package size is about 60 inches long, 16 inches wide and six inches thick. The weight of the package is about 70 pounds. I hope this helps you plan your transportation better.

Well, I’m gonna say, “So, can it fit in the back of my 2002 Jetta?” Okay.

It might be possible to fit the package in the back of your 2002 Jetta if you fold down the rear seats and slide it in diagonally. However, it might be a tight fit and you should measure the space more carefully before attempting it. Alternatively, you could use a roof rack or trailer to transport the package more safely and easily.

Based off of me reading that back to you, and that being presumably your first AI-chatbot experience: Is that underwhelming? Is that overwhelming? Is that whelming?

Mull: Honestly, it did work a little bit better than I expected for that kind of query.

Warzel: But I also didn’t feel like I was living in the future, right? Like I was kind of just like, Okay, cool.

Mull: Right. But I didn’t expect it to quite so efficiently respond to the query. Honestly. Just straight up.

Warzel: Yeah. So you know, points for Bing.

Mull: Yeah, it’s strange, after hearing so much hype about AI and about the promise of AI search and about how this could change the internet and change our experience with the internet, to finally go through the process of using it and end up like, Oh, that was okay.

Warzel: Yeah.

Mull: Can you tell me in basic, straightforward terms, what it is that these services are doing?

Warzel: So it’s a couple of things happening at once that give it this feeling of the smart robot who’s returning your every query. You have the GPT-3, GPT-4, whatever-style text model, which is predicting, based off of all of this information it’s absorbed, what’s coming next. Then layered on top of that, you have everything that Bing or Google can do with search in general, and it’s kind of mashing those two together. So when we say, “Can I fit this IKEA bookshelf into my Jetta?” it’s just doing kind of raw searches.

Mull: Yeah.

Warzel: And instead of even giving you links, we’re just going to give you the definitive answer. And that to me feels a little like, Yes, this is helpful. Are we also sort of missing the bigger picture of what people are actually looking for when they’re trying to search?

Mull: Right. It seems like Microsoft and Google in their AI-search efforts are perhaps trying to provide us with God’s perfect Redditor. This is the guy who knows the answer, except it’s an AI chatbot that has looked at all of Reddit and condensed it for you.

Warzel: This is honestly one of my big concerns about AI-enabled search in general, which is that we’ve been doing this since, I don’t know, like 2016. Having these really furious debates over content moderation and who gets to say what. And in the world of search, there are all these different ideas about what websites Google is going to elevate. You know, “Is it going to be third parties that Google hires to evaluate the quality of different links when it comes to financial information, medical information, news, or is it going to be engineers at Google themselves? Is Google going to be the arbiter of truth here?”

That was in an environment with search where it was still, “We’re going to give you a slew of links,” right? And then it’s choose-your-own-adventure. This is a new paradigm that essentially says, “We’re going to give you a canonical, definitive answer.” The machine is going to scroll through and it’s going to talk to you like a human being, it’s going to speak really confidently, and it’s going to give you the information to the best of its ability and as we know, these machines right now are really good at confidently lying or bullshitting.

Mull: Yeah. It seems like this has enormous capacity to sort of obscure what is happening in a way that seems not helpful. What do you mean by B.S.? Can you give us an example?

Warzel: There are so many examples of people typing stuff in. “Write me a book report about Animal Farm,” [or] some book, and it will come back with very confidently stated false facts. Someone described it as—and I really like it—“spicy autocomplete.” Which is just that these models have scanned billions and billions of pieces of information from books to commentary to literally everything and it’s just predicting, based off of that, what should come next. And so it’s not necessarily trained to give you these canonical definitive answers. The confidence and the sophistication with which it gives you an answer leads you to believe that the machine’s telling you the truth when it’s stating something that’s false.

Mull: We’re going to take a short break and we’ll be right back.

Mull: Are there any good examples that we’ve seen so far in the extremely early days of these tools that demonstrate what some of the potential pitfalls are?

Warzel: A good early example is that in one of Google’s promos for its own search-chatbot tool, it apparently surfaced something about the James Webb Space Telescope that was incorrect.

And Google’s stock price slid by like 8 percent on that day. But then online there was this whole series of people—like astrophysicists—who were debating, “Actually like technically … maybe the Google answer was right.”

And so you kind of got this preview almost of what the future of all this is going to be, which is that, you have the machine confidently stating something and then all these different people arguing about whether or not this was true or false and meanwhile, everyone’s gaining and losing fortunes in the stock market based off of it. It was a very kind of dystopian preview.

Mull: Yeah, yeah, absolutely. And I think that when it comes to shopping, recommendations, confident recommendations, [are] a big existential problem in online commerce. Because we have this vast array of options on the internet and they are too many and too confusing for people to make meaningful sense of them and feel confident in purchasing things, often. So the tone of AI and the confidence with which it says things gives it an opportunity to be especially manipulative in a lot of ways, but with commerce in particular, because often what people go to Google for when they’re trying to figure out if they should buy something is to hear someone confidently tell you, “This will work for you.” It seems like this can be exploited by all the same forces making your Google results now absolutely unusable in a lot of ways.

Warzel: Yeah, I mean, to me what’s really tough to wrap my head around is that the whole promise of these large language models that generative AIs are built off of is that it has read the internet and it has read every great book and it has read every … it’s trained off of more information than you could ever even imagine cramming into a million people’s brains.

And if you start monetizing this, what you’re doing then is you’re, in essence, retraining this thing that is supposedly transformative to say, “Yes, we know that like you are the modern Library of Alexandria. But we would really love it if you put this canine chew toy at the top of the stack,” right?

There’s something so corrupt about that premise: you have the totality of the world’s information at your disposal, and this product is the thing you need to show above everything else because somebody paid for it.

And to me, it’s also adding a whole other level of opacity to this. These models are really hard to understand, and it’s gonna be very difficult to unravel where the money is coming from. With Google right now, it’s just like, “Hey, this box has a green border around it and it says paid or it says ad.”

You just, even if you click on it, you kind of know, Oh yeah, that’s how the game works. This could be totally different and sort of insidious and maybe not nefarious, but it’s like the virus of capitalism, infecting the host of the AI or something like that, you know?

Mull: Right, right. When some fundamental structure of the internet—and how regular people use the internet—changes, there are always so many unintended and unpredictable consequences. Even people who are not particularly interested in tech can look at social media and go, Yeah, the advent of these platforms fundamentally changed the way that a lot of people interact with their friends, interact with politics, interact with information.

It’s hard to imagine a scenario in which this doesn’t have just sort of all kinds of messy, unintended consequences that we probably can’t predict right now. The signal that I think these companies are picking up on is that in order to solve problems, people generally want to interact with other people. You want to be able to walk into a store and ask a question of a salesperson. You want to be able to get someone on the phone to fix a problem for you. You want to ask a knowledgeable individual about something you’re unsure of and benefit from their expertise. It seems like they’re fundamentally misdiagnosing what people actually want, which is to have more of their lives mediated by interactions with other people than by layers of technology.

Warzel: This is, really and truly, always what you butt up against with people who are pushing technological change forward. Creating an artificial general intelligence is seen as a challenge, right?

That has been the stuff of computer-science dreams for decades and decades and it’s sort of this quest to do this thing that feels almost unthinkable, right? And then of course it’s like, Oh well we need to commercialize it, and then you get into this kind of territory.

Mull: Yeah.

Warzel: One of the true fears that a lot of people have, that I think is very well-founded if we are actually entering a new epoch or whatever you wanna call it of life dominated by artificial intelligences, is that there’s going to be this stratification, this class stratification, where it’s like, “Congratulations, you will be able to see an AI doctor anywhere, as soon as you want, and it’s going to do a reasonably good job of diagnosing you with whatever,” right? It’s going to be a really competent thing.

Meanwhile, the people who have the means and the power and the money, they’re going to have the in-person doctor experience, right? They’re going to actually have the person lay hands on them and look at them and talk to them and ask them those questions and use that expertise and have the bedside manner and all those things. Whereas the rest of us are going to get …

Mull: Doc bot.

Warzel: Doc bot. Exactly, and so I think that that’s a very real thing that goes just beyond even the experience of online consumption to buy products or to get information. It goes to like the very heart of very important services, right?

I would love to not trust my taxes to an AI accountant, no matter how good it gets, or have an AI financial adviser who is responding faster than any other human could to algorithmic triggers in the stock market to buy, sell, and trade. But also, a great thing about a financial adviser who’s not responding instantaneously to algorithmic shocks is that they have the ability to be patient, to wait, to use their creativity or their knowledge, all these things. I think that’s the scariest part about the era: it does take so much humanity out of what we’re headed toward.

And the human interaction becomes even more and more of a luxury. The fanciest technology billionaires don’t let their kids have screen time because they’re terrified of what the screens are doing to them. That kind of thing repeats, and with the AI it’s weird and possibly quite consequential.

Mull: Yeah, nobody should be going to an AI doctor that’s not widely used by billionaires themselves. I’ll let the VCs go first.

Charlie, thank you so much. This was fascinating and unsettling. I appreciate you walking me through all this.

Warzel: “Fascinating and unsettling” is my wheelhouse. Thank you for having me. I appreciate it.

Mull: This episode of Radio Atlantic was produced by A.C. Valdez, Theo Balcomb, and Kevin Townsend. Claudine Ebeid is the executive producer of audio at The Atlantic. Thanks for listening.