AI’s Present Matters More Than Its Imagined Future

Let’s not spend too much time daydreaming.

[Illustration: a crystal ball filled with shifting green zeros and ones. By Joanne Imperio / The Atlantic; source: Getty.]

Last month, I found myself in a particular seat. A few places to my left was Elon Musk. Down the table to my right sat Bill Gates. Across the room sat Satya Nadella, Microsoft’s CEO, and not too far to his left was Eric Schmidt, the former CEO of Google. At the other end of the table sat Sam Altman, the head of OpenAI, the company responsible for ChatGPT.

We had all arrived that morning for the inaugural meeting of Senate Leader Chuck Schumer’s AI Insight Forum—the first of a set of events with an ambitious objective: to accelerate a bipartisan path toward meaningful artificial-intelligence legislation. The crowd included senators, tech executives, civil-society representatives, and me—a UC Berkeley computer-science researcher tasked with bringing years of academic findings on AI accountability to the table.

I’m still unsure of what was achieved in that room. So much of the discussion focused on concerns and promises at the extremes—the gravest dangers and the grandest benefits of AI—rather than on a clear-eyed understanding of the here and now. Speculation about the future of AI is fine, as long as we don’t spend all of our time daydreaming. But that’s precisely what’s happening as American lawmakers scramble toward tangible AI rulemaking.

Understandably, part of the difficulty in establishing concreteness in conversations about AI stems from the broad use of the term AI itself. It’s one of those umbrella marketing terms that can be tilted in whatever direction suits the speaker—toward the sun in the morning, against the rain in the afternoon. According to Congress’s own taxonomy of legislative efforts, AI encompasses simple risk assessments and facial-recognition tools. It swallows systems responsible for automated decisions and deepfake political images. It covers every recommendation system buried in an online platform, as well as every verbose and vacuous chatbot. An “AI” model simply implies a data-driven path from input to output: any situation where what you get is related to what you give not through the careful consideration of a human being but through the not-always-so-careful calculations of a computer.

As with any other business buzzword, the term AI is leveraged heavily in the technology’s advertising. At the forum, executives extolled its superpowers. AI could transform education. AI could soon cure cancer. AI was touted as a possible solution to poverty and to world hunger. It could supercharge the productivity of the modern employee and revolutionize the workforce. As is commonly the case, these almost-fantastical benefits were paired with notions of grave, far-out dangers. Some attendees invoked the risk of malicious actors using AI to manufacture bioweapons or precipitate nuclear war, especially if models were to become freely available via open source. Musk called AI a “double-edged sword,” an incredible alien technology that would be so powerful that it could cause immediate disaster if it were ever to find its way into the wrong hands.

Schumer’s AI meeting was closed to the press, so the actual transcript of what occurred that day is not public. As the attendees spilled out, everyone wanted to know: “What happened?” But what some were really asking was: What did Musk and Altman say? Following the meeting, some senators criticized the closed-door nature of the conversation. Schumer, meanwhile, echoed many of the tech executives’ points in praising the meeting’s success.

AI absolutely is powerful, and it absolutely is dangerous. But as these perspectives reverberate throughout committee hearings, government advisory boards, press releases, and lobbying memos, it only becomes clearer that focusing on just a subset of influential corporate voices is an inherently limited approach. The world is so much simpler when context is contrived or even extrapolated, rather than observed. Without taking seriously a different kind of experiential expertise, we risk underestimating the effects that AI is already having on everyone. I should know: In academic circles, I encounter discourse that is equally removed, whether in the form of elaborately worded social and legal theories or dense mathematical equations and code repositories. With words or symbols, many researchers, too, speak in general terms and about invented use cases. Data sets are often disembodied from context or meaning, and still chronically underdocumented. The benchmarks we rely on to evaluate how AI models perform tend to be completely disconnected from real-world applications and consequences.

The safety of millions of Americans requires a much more grounded perspective. At some point in Schumer’s forum, Laura MacCleery, a representative of the Latino-advocacy group UnidosUS, shared a story from her experience with earlier tech efforts to improve education: a dead computer monitor being used as a doorstop in her low-income school district. Similar anecdotes from other civil-rights organizations and from labor-union leaders reminded me of the situation’s complexity. Sure, AI can help with poverty, but it is also leaving people vulnerable to financial scams. AI can advance cancer research, but it still struggles to produce meaningful outcomes in health care. AI can increase productivity in workplaces, but the “new AI workforce” also involves the precarious labor of AI raters and rampant piracy.

A product doesn’t always work as expected in the wild. In recent years, I’ve read with awe reports of AI systems revealing themselves to be not mythical, sentient, and unstoppable, but grounded, fragile, and fickle. A pregnant Black woman, Porcha Woodruff, was arrested after a false facial-recognition match. Brian Russell spent years clearing his name from an algorithm’s false accusation of unemployment fraud. Tammy Dobbs, an elderly woman with cerebral palsy, lost 24 hours of home care each week because of algorithmic troubles. Davone Jackson reported that he was locked out of the low-income housing his family needed to escape homelessness because of a false flag from an automated tenant-screening tool.

“They didn’t ask for this,” Fabian Rogers, a tenant organizer in Brooklyn, once told me. The residents in his public-housing building were in a dispute with their landlord over the use of facial recognition in a new security system. “The hardest part about all this is to take someone with a kid, thinking about rent and affording groceries, coming back from a long day of work, and tell them that they should care about any of this,” he said.

I’ve begun to understand what Rogers meant. No serious policy deliberation happened on the day of Schumer’s inaugural forum. No corporate secrets were spilled. It was a day of softball questions and prepared statements. In my years of advocacy and research, I have often found myself on similar advisory panels, notching hours indoors surrounded by capital-D decision-makers while peeking out a conference-room window at the enviable green visible through the slit between beige curtains. As usual, we spent the whole day shifting around slightly, all caught inside the same kind of cushioned, swiveling office chair.

The truth is, “AI” does not exist. The technology may be real, but the term itself is air. More specifically, it’s the heated breath of anyone with a seat across from the people with the authority to set the rules. AI can be the enthused pitch of a marketing executive. Or it can be the exhausted sigh of someone tired and perhaps confused about how minute engineering decisions could upend their entire life. As lawmakers finally start to make moves on AI, we all have a choice about whom we listen to.