Here’s How AI Will Come for Your Job
Instead of being replaced by robots, office workers will soon be pressured to act more like robots themselves.
Abandon all hope, ye who merge spreadsheet cells! Last week, at its annual I/O conference, Google spent hours detailing how large language models would help the knowledge workers of the world unload their busywork onto a legion of eager, capable neural networks. The company will soon introduce AI functions into programs such as Gmail, Google Sheets, and Google Slides that will allow users to type simple commands and receive complex outputs: entire email compositions, for example, or auto-generated tables. The future that Google is promising feels familiar—it’s all about heightened convenience and one-click efficiency—and I hate it. Workplace AI feels like the purest distillation of a corrosive ideology that demands frictionless productivity from workers: The easier our labor becomes, the more of it we can do, and the more of it we’ll be expected to do.
This is how AI comes for our jobs, one ChatGPT-generated slide deck and inbox integration at a time. The true AI apocalypse on the horizon looks less like a cataclysm than a soulless grind. Humanity isn’t to be obliterated by a vengeful artificial sentience, and office workers probably won’t be replaced en masse with machines; instead, we will be expected to produce and behave more like robots ourselves. Less Skynet, more Bain & Company.
In its idealized state, generative AI is the ultimate productivity tool. Large language models are intelligent-seeming (if fundamentally unreliable), trained on mountains of information, and eminently capable. They produce LinkedIn-sounding prose that’s perfect for just circling back. For some people, running a ChatGPT window on a work computer has already become as routine as writing with spell check. ChatGPT’s Code Interpreter plug-in is able to edit video, pull and analyze information from complex spreadsheets, and build dazzling custom charts and visualizations with a single prompt.
The promise of artificial intelligence is automation, and the promise of automation is to remove friction from the process of production—of typing words, of crunching numbers, of synthesizing information. Generative-AI tools are, in essence, pattern-recognition engines, and their wide deployment is seen by evangelists as the beginning of a rapid expansion of the amount of intelligence in the world, whatever that means. It is a vision of productivity defined by endless possibility.
We’ve seen this one before. Time and again, a piece of technology promises to increase productivity by chipping away at the inefficiencies in our lives. We’re told that it will liberate us—from the tyranny of our inboxes or from toiling on factory floors—and we will recoup our time, the most precious commodity of all. But that time is usually reinvested into more labor. The logic is simple and circular: Increased efficiency frees us up to be more productive. Frederick Winslow Taylor and his stopwatch ruthlessly optimized the factory floor at Bethlehem Steel by surveilling workers and forcing them to eliminate breaks and streamline their motions. The principles of Taylorism changed business and management forever. But its gains weren’t to the benefit of the worker, who was simply driven to produce more each shift.
The story repeats with many prosaic office technologies. Email didn’t dismantle the culture of interoffice memos and workplace correspondence, but it did make them readily accessible all the time. Slack, the corporate email killer, hasn’t unclogged our inboxes. Instead, it is merely another workplace channel workers must tend to—another way to be productive and available to our colleagues and bosses, instantly, at any time. Why should we expect generative AI to free us from this familiar cycle?
In a world where the cost of producing content, correspondence, research, and code approaches zero, it stands to reason that the forces of capitalism would respond by demanding as much of it as possible. And even if humans aren’t the ones producing every solitary word, phrase, sound, or string of numbers, humans will be tasked with generating, editing, and corralling all this synthetic media. If artificial intelligence is coming for our jobs, its plan is to turn us all into middle managers of overlapping, interacting AI systems. The only problem? Middle management is stressful, grinding, usually thankless work. People speak derisively about middle managers because their outputs are hard to define and monitor—they are viewed, sometimes unfairly, as a mere link in the chain.
When I look at a future dominated by generative-AI tools embedded in every nook and cranny of industry, I fear the coming grind. I see inboxes crushed under the weight of robot responses and rapidly generated slide decks. A sea of forgettable, lorem-ipsum emails written in polite, authoritative MBA-speak, whose sole purpose is to trigger other robots to reply. I see creative industries strip-mined of their humanity in order to create content at the vertiginous scale of a generative-AI internet. What happens to the music industry when anyone can construct a banger of a song in the style of any popular artist? Likely not the destruction of the artist in total, but a devaluation of her skills—yet another technological crisis for the working musician.
One could imagine a future with grueling record-company contracts that demand multiple albums a year from artists, now that they can outsource lyric writing, vocals, and studio sessions. More content means more grease for the algorithmic gears and AI-powered recommendation engines of streaming platforms. The same logic applies to my profession: Why wouldn’t publications expect writers to churn out five or six stories a day, now that they have their own AI-based research and writing assistants? Such a tsunami of forgettable, mass-produced content would, of course, dilute advertising markets and drive down the price of advertising sold against that content, which would mean a greater need to produce … more content.
You can already see the outlines of this dull, efficient future coming into view. Studios like Netflix are toying with the idea of letting generative-AI programs sketch elements for animated shows, and rumors are circulating in Hollywood that studios are mulling the use of AI to write first drafts (to be punched up later by humans) amid the Writers Guild strike. The content sludge is also present in Big Tech’s plans to reimagine search as an interactive, chatbot-powered walled garden. Type a question, get a canonical answer in the voice of a friendly assistant. It’s a process that, as my colleague Damon Beres recently wrote, “makes the internet feel smaller” and, potentially, functions as a dam, holding back search traffic to websites everywhere. In this imagining, search engines don’t need publishers to provide a quality product—they simply need a tonnage of copy to keep the algorithmic machine running.
In 2017, I interviewed Jonathan Albright, a researcher who showed me how he’d stumbled upon a strange phenomenon across YouTube. He’d found a trove of channels comprising tens of thousands of videos. Most were crudely assembled slideshows, using text and images copied from political-news articles across the web. A halting computer voice read quotes from the text as the slideshow played. The channels were publishing new, cookie-cutter videos every three minutes. Most of it was unwatchable. Some of the videos hadn’t registered a view yet, but others had hundreds of thousands of plays. After some digging, he’d found that the videos were generated by an AI to influence YouTube’s recommendation algorithms. The content of the videos was irrelevant—what was important was the signal they were sending to the platform: that there was a demand for political news videos.
At the time, I was unnerved by this idea of a shadow economy of robots making content for robots, its sole purpose to tilt a platform slightly in one’s favor. Now the shadow economy feels like a template for a generative-AI future. The optimistic argument for these types of productivity tools is always that they unlock human potential and creativity—and they will. But it’s hard to imagine what this looks like at scale. Creativity is an inefficient, nonlinear process. The joy and the magic are in the friction. Productivity is, in many ways, its opposite. And AI is, above all else, a fully realized productivity tool with a mandate to eliminate friction wherever possible. AI is coming for our jobs, our creativity, and our culture—just probably not in the ways you expect. It’s not quite an apocalypse. It’s far more boring than that.