A specter is haunting Gmail—the specter of a completed sentence. My fingers tap out the beginning of a message, and a gray phantom appears, with eerie anticipation.

“Thanks for taking [a look!]”

“Tuesday’s no [good, sorry.]”

“Can’t tom[orrow but what about next week?]”

The spectral presence is a technology called Smart Compose. If you’ve used Gmail even once in the past few months, you’ve almost certainly noticed the function, even if you didn’t use it or know its name. Smart Compose is the more advanced kin of another new Gmail technology, called Smart Reply. That’s the name for the boxes that may appear under a new message suggesting a rote reply, such as “Thanks so much!” or “Yes and yes!”

Some technology works, so people like it; and some technology doesn’t work, so people hate it. Google’s Smart Compose belongs to a different category: tech that people hate because it works. Smart Compose has an uncanny ability to auto-complete replies with my exact phraseology—to do precisely the thing for which it is designed—and it is for this very reason that I (and, I’ve gathered, many others) find it so unbelievably creepy.

But what could possibly be so sinister about typing “See y” and then hitting the tab button to have a neural network write “ou later”?

In Google’s defense, I should admit that email is often a waste of human labor. The typical worker receives about 100 emails a day and sends just under 50. The vast majority of these emails should not be sent, and the vast majority of text within even the properly sent emails need not be written. There’s nothing wrong, in theory, with technology that reduces the busywork of email.

What’s more, it’s a little odd to harbor animosity toward a technology of anticipation since, at a high level, the consumer-tech industry is all about building anticipatory networks at scale. Amazon, Netflix, Spotify, and every company in the attention economy anticipate what you’d like to see, hear, and buy next. Software might be eating the world, but anticipatory software is doing most of the chewing. So why direct anger at a mere email auto-completer?

In short, there is a distinction between anticipation and predictability. This is one of the subtleties of any intimate relationship, whether it’s with a spouse or a neural network. It is a joy to feel seen by another person but a horror to be told that your tastes are easily decoded. “You really know me” is a loving expression of intimacy. “Yes, because your preferences are so very predictable” is a rhetorical shiv in the spine. Gmail uses its predictive powers to make its users feel predictable.

Smart Reply and Smart Compose are smart features that have the effect of highlighting just how unsmart we might be. In a recent interview with a source for another story, I brought up my issues with Gmail’s auto-complete function, and we ended up talking about that for several minutes. “It can be so stressful!” he said. “Sometimes I see Gmail suggest a sentence and then I feel like I have to come up with a better sentence than the machine, because I don’t want my response to feel robotic.” In these cases, Smart Compose doesn’t automate the email process or save time at all. Rather, it extends the work of replying to email by alerting writers to the banality of their prose and by establishing a kind of Mendoza line for non-robotic emailing that has to be surpassed before the author can hit send with his soul intact. As the source continued to talk about his email issues, I laughed the nervous laugh of somebody who felt not eerily predicted, but deeply understood.

The optimistic promise of technology is that it allows humans to focus on what really matters, to be “more human.” Farm technology freed most workers from agrarian labor, then manufacturing technology freed more workers from the factory, and thus the labor force has slowly climbed Maslow’s hierarchy toward advanced health care, fine dining, entertainment, and yoga instruction.

But email automation does something quite different. Rather than free people from drudgery so that they can focus on themselves, it directs users’ focus to the very fact of their banality, their robotic tendencies. It says: Ha ha, you email like a droid. I mastered your pathetic email-response style, and it only took me like a second, dweeb. That is, at least, the voice of my own existential angst.

Some might think I’m being histrionic when I say that Gmail’s auto-reply function fills me with existential angst. But I’m trying to be literal. Martin Heidegger, the 20th-century philosopher who wrestled with existentialism before he collapsed into the arms of Nazism, was obsessed with the idea of authenticity—the challenge of fully being oneself, unencumbered by outside influences. He said one of the key giveaways of having “fallen away from [oneself] as an authentic being” was participating in “idle talk”—speaking in a critically unexamined way that reveals nothing special or unique about the individual. Or, in the un-auto-fillable language of Heidegger: “being lost in the public-ness of the ‘they.’”

Google’s auto-reply feature reveals to us the previously hidden vapidity of our communications. That’s why we—or I, at least—hate Smart Compose. Not, again, because the tech is bad. But because it’s good enough to illuminate the exclamation-marked inauthenticity of our correspondence.

As machine learning improves and becomes more promiscuously involved in our day-to-day lives, there will be yet more surprising moments of uncanny prediction. Neural networks will get better at anticipating our behaviors and thoughts, based on previous behaviors and thoughts, mapped against the behaviors and thoughts of our broader psychosocial demographic. To see these technologies in action is to be confronted with the fact that we are not so very special or unique. It may be that these algorithms prove that which Heidegger and the existentialists were trying to tell us all along: We are all lost to the they-self, and maybe there wasn’t much of a me-self to begin with.

Or maybe I’m just overthinking it. Thanks for t[aking a look!]