
Amazon has patented a new technology that would empower Alexa to monitor users’ emotions, analyzing the pitch and volume of spoken commands, and respond according to how they’re “feeling.” As described in the patent, Alexa may come to recognize “happiness, joy, anger, sorrow, sadness, fear, disgust, boredom, [or] stress” and respond to commands accordingly, perhaps with “highly targeted audio content, such as audio advertisements or promotions.”

Patents are not products, of course—but they can offer insight into how companies will approach emergent tech. In this case, the patent hints at new possibilities for dynamic targeted advertising in its always-on line of products. The patent lays out an example: Say you tell Alexa you’re hungry, and she can tell by the sniffle in your voice that you’re coming down with something. She can then ask if you want a recipe for chicken soup, or she can ask a question “associated with particular advertisers.” Perhaps Panera wants to tell you about its soups.

Targeted advertising has traditionally rested on demographics: Makeup is targeted to women, barbecues to men; acne medication to the young, heart medication to the old. Algorithmic profiling has since taken that much further—advertisers can specifically target, say, single-mother heads of household under 25, or West Coast Democrats over 40.

These categories are largely static. But if Amazon had a line of products that continually monitored us, responsive to every shift, the door would be open for devices to relate to us in a much more fluid way—responding to us based not just on who we are, generally, but who we are in any given moment. This is a boon for advertisers: Most of the time, I wouldn’t be interested in buying an Enya album—but if you ask me in the immediate, teary aftermath of an emotional text message exchange with a lover, I’d probably say yes. I may not go to Panera often, but if the idea is suggested to me when I am hungry and feeling sick, maybe I will.

Amazon isn’t the only technology company pursuing technology that takes full advantage of these emotional windows. Google has a similar patent, for a method to augment devices to detect negative emotions; the devices would then automatically offer advice. IBM has one that would help search engines return web results based on the user’s “current emotional state.” Searching for “good podcasts,” “football,” or “events near me,” for example, would return different results depending on the user’s mood, as determined via face recognition through the webcam, a scan of the person’s heart rate, or—and this is where the “patents are not products” disclaimer must be emphasized most heavily—the “user’s brain waves.”

Spotify, meanwhile, is already practicing a type of dynamic emotional targeting all its own. In 2014, it began associating playlists with different moods and events, and selling ad space to companies based on those associations. An Adele-centric playlist may be a dead giveaway for emotional turmoil, so products associated with sadness (ice cream, tissues) would be recommended. A hip-hop-heavy playlist might come with a “block party” association, and Spotify would suggest that playlist to a company advertising barbecue sauce, and so on.

The purpose of profiling is to sell products. Each of us is made up of dozens of marketable categories. Dynamic emotional targeting ups the ante: Now we are collections of categories both stable (gender, age, residence) and in flux (mental and emotional states), and our devices are eager to hear all about it.
