Welcome to the Golden Age of Clichés
Chatbots can’t think outside the box, unleashing even more nonsense filler words onto the world.
A few weeks ago, the editor in chief of The Atlantic, Jeffrey Goldberg, sent an email to the newsroom about clichés. To paraphrase: He wanted us to try harder to avoid them, because nobody comes to the website or the print magazine to be bored and annoyed.
Did I think that this email was directed at me, personally, and that it referred to the unacceptable number of clichés in my writing? Well, he included a long list of specific words and phrases to avoid, and several of them looked … familiar. Needless to say, nonsense filler phrases such as needless to say are bound to slip into even not-lazy writing. Clichés aren’t necessarily inherently bad turns of phrase—in fact, they’re generally good, and that’s how they become clichés. The first person to say “in the wake of” should have been proud of themselves. It was an evocative boat metaphor. But now it is horrible.
And, sadly, this human failing is spreading via machine. ChatGPT, the popular bot released to the public by OpenAI late last year, is obsessed with clichés and uses them all the time. Perhaps it is no coincidence that use of the chatbot has already become common in areas of life where people write formulaically and blandly—student essays, cover letters, BuzzFeed quizzes, etc. If I wanted to reapply for my own job, ChatGPT suggests I begin by saying, “As an avid reader of your publication, I am drawn to the high-quality journalism and thought-provoking analysis that The Atlantic consistently delivers.” ChatGPT has been in the news near-constantly during the past several months, because a future full of chatbots raises many complicated, existential questions about how humans can coexist with artificial intelligence. But it also raises another question, to which there is an obvious, simple answer: Are chatbots ushering in a new golden age of clichés? Yes.
ChatGPT can write you anything, but it can’t write you anything good. If you ask ChatGPT to write something that has any kind of tired and played-out associations, they’ll all appear. Dialogue from Survivor: “She put a target on her back.” Toast at a holiday party: “Our team has risen to the occasion time and time again.” High-school-graduation speech: “Once again, congratulations to my fellow graduates. We did it. And as we move on to the next chapter of our lives, let us always remember the words of Ralph Waldo Emerson: ‘Do not go where the path may lead, go instead where there is no path and leave a trail.’” To try to make things a little more complicated, I asked ChatGPT to write a reply to my boss’s email for me. It began by “wholeheartedly” agreeing with his anti-cliché stance and everything else he said. (Good idea.) Then it made a promise: “Moving forward, I will make a concerted effort to steer clear of cliches in my writing.”
Even when I asked specifically for a high-school-graduation speech without clichés, the chatbot wasn’t able to resist. “Today marks a significant milestone in our lives,” ChatGPT told its fellow graduates. “As I reflect on my time here, I am reminded of the words of the great philosopher Socrates, who said, ‘I know that I am intelligent because I know that I know nothing.’ To me, this quote encapsulates the essence of a true education.”
A chatbot isn’t going to come up with many creative turns of phrase on its own, because it’s not, technically speaking, coming up with anything at all. Created using the technology of so-called large language models, ChatGPT starts sentences based on prompts and then predicts the likelihood of each successive word, trying to guess what a person would say based on a massive amount of information it’s seen from the internet. When a bot calculates the probability of one word following another, clichés become very likely, because they’ve appeared so many times before. “When I say ‘the pot calling the kettle—’ it’s really hard not to say ‘black,’” says Melanie Mitchell, a professor at the Santa Fe Institute who often writes about large language models and human reactions to them. “Under the hood, it’s computing probabilities over its entire vocabulary of words as to what word to generate next, given everything that’s come before and your conversation with it,” she told me. “If you start with a single word related to a cliché, the probability is just so high for the next word that even saying ‘Don’t use clichés’ might not be enough to override it.”
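Mitchell's kettle example can be made concrete. Real systems such as ChatGPT use neural networks to score every token in a huge vocabulary, but the underlying probability logic — count what tends to follow what, then favor the frequent continuation — can be sketched with a toy bigram counter. (The corpus and numbers below are illustrative, not drawn from any actual model.)

```python
from collections import Counter, defaultdict

# Toy corpus: the cliché ending appears nine times, a fresh ending once.
corpus = (
    ["the pot calling the kettle black"] * 9
    + ["the pot calling the kettle copper"]
)

# Count how often each word follows each preceding word (a bigram model).
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

def next_word_probs(prev):
    """Probability of each candidate next word, given the previous word."""
    counts = follows[prev]
    total = sum(counts.values())
    return {word: count / total for word, count in counts.items()}

print(next_word_probs("kettle"))
# {'black': 0.9, 'copper': 0.1}
```

In this sketch, "black" wins nine times out of ten simply because it dominated the training text — which is exactly why, as Mitchell notes, even instructing a model to avoid clichés may not override the statistics.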
The problem isn’t that ChatGPT has no idea what a cliché is. It can tell when a phrase is super common, and it can find out whether a word has been described as being overused. “I have been trained on vast amounts of text, which includes many examples of cliches,” the bot told me when I asked whether it was aware that it was using a lot of clichés. “I can recognize when a phrase or expression is commonly used and may be considered a cliche.” When I asked specifically about some of the items from the editor in chief’s list, ChatGPT agreed that phrases such as across the pond and at the end of the day are “overused to the point of being trite,” and that boots on the ground and cooler heads prevailed appear too frequently in journalism and political writing. ChatGPT was even pretty good with the words our editor had included on his list that weren’t exactly clichés, just irritating. Amongst is “overly formal,” it said, and authored is “pretentious.”
But it sometimes equivocated about whether something was definitely a cliché, and it also struggled with clichés that came from specific source material. For example, when I asked ChatGPT about the phrase Reader, I (as in, “Reader, I married him”), it recognized the line from Jane Eyre and said that it was a well-known literary device. “But is it a cliché?” I asked again. “If you’re asking whether ‘Reader, I married him’ is a cliche, then the answer is no, it’s not a cliche,” the bot told me. “This is a famous line from Charlotte Brontë’s novel Jane Eyre, and while it has been widely quoted and referenced, it hasn’t lost its originality or impact.”
Reader, I finally asked ChatGPT the most straightforward version of my question I could think of: “ChatGPT, why do you love clichés so much?” First, the bot reminded me that it’s not accurate to say that an AI language model “loves” anything. Fine. Then it told me that none of the language it generates is a reflection of its own preferences. “The way I respond to prompts is based on the patterns and relationships I have learned from the vast amount of text that I was trained on, which includes both good and bad writing,” it went on. “This means that sometimes, my responses may include cliches or overused phrases simply because they are common patterns in the language I have learned from.” If I would rather the bot use fewer clichés, it suggested, I might try prompting it with some different—presumably more creative—questions or topics.
If I read between the lines (sorry!), that ChatGPT response is almost a personal insult. If its outputs are predictable, that’s because the inputs are too—both the sentences I’m writing now and the billions it was trained on. And as people like me continue playing with chatbots—asking them for a generic piece of writing to help appeal a parking ticket or thank a distant relative for a gift, accepting whatever ordinary phrases they give us—we will bring about the further proliferation of clichés. I asked Mitchell if she feared a super-clichéd future—new chatbots learning about human language from clichéd writing that had been written by old chatbots based on clichéd writing written by people, the clichés multiplying with each round. “There’s been a lot of discussion of that,” she said. “Should these systems be trained on their own output, and where does that lead to?” She’d heard somebody describe it as “regression to the meh.”
When I asked ChatGPT to write a story in the style of The Atlantic about how ChatGPT uses too many clichés, I couldn’t tell whether the result was a joke at its own expense or at mine: “Like a moth to a flame, ChatGPT found itself drawn to the familiar turns of phrase that littered the English language.” For me, there is no light at the end of the tunnel. I know that playfully ending a story about clichés with a bunch of clichés is itself clichéd, but unfortunately, my hands are tied.