Earlier this year, Mark Zuckerberg hosted an online Q&A session on his personal Facebook page. Over the course of an hour, Zuckerberg fielded questions on topics ranging from the meaning of happiness to whether he was able to “jump over a chair like Bill Gates.” (Answer: “Maybe, but we’re not going to find out today.”) But the answer that made headlines was unprompted and unexpected: Zuckerberg declared that Facebook’s ultimate service was telepathy.
Asked to describe Facebook’s future, he said:
We’ll have AR [augmented reality] and other devices that we can wear almost all the time to improve our experience and communication. One day, I believe we’ll be able to send full, rich thoughts to each other directly using technology. You’ll just be able to think of something and your friends will immediately be able to experience it too if you’d like. This would be the ultimate communication technology.
Facebook’s present system for capturing thoughts—innocently asking us “what’s on your mind?”—suddenly seemed retrograde by comparison.
The “ultimate communication technology” may be a long way off for Facebook users, but it has already been realized by neuroscientists, albeit using some relatively simple thoughts. Last year, scientists at the University of Washington published the results of an experiment in which pairs of people, each person located half a mile apart from their partner, used “brain-to-brain” communication to play a computer game as a team.
In each pair, one participant acted as the “sender” of thoughts, watching the game on a screen as an EEG recorded their thoughts about when to fire a virtual gun. The thoughts were then transmitted over the Internet to the “recipient,” who had access to a keyboard with which to fire but could not see the game. Shooting accuracy, which the researchers used to gauge telepathic success, varied among the pairs from 25 to 83 percent.
One way of understanding the enthusiasm for telepathy is to consider its inverse: the growing suspicion of traditional verbal communication. Consider the remarkable rise of emoji, which, according to one British linguistics expert, is “the fastest growing form of language in history, based on its incredible adoption rate and speed of evolution.” In August, Hillary Clinton sent a tweet asking, “How does your student loan debt make you feel? Tell us in three emojis or less.” The tweet was widely mocked, but it was telling that emoji are now a tool leaders use in their efforts to foster emotional intimacy.
Corporations have long been interested in scientific ways to detect emotions for the specific purposes of management and marketing. Not long after psychologists built the first tools for measuring human attention via eye movement in the late 19th century, advertising companies were exploring how they could be used for consumer insight. Surveys and focus groups, which require market researchers to listen to how people feel, are now being supplanted by neuromarketing. Martin Lindstrom, a leading Danish neuromarketing guru, sums up the new philosophy in a simple one-liner: “People lie, brains don’t.” Observe what people really feel, the thinking goes, rather than what they say they feel. Words are perceived as an obstacle to honesty, rather than a means of delivering it.
And just as people no longer need to verbalize how they feel, they may one day be able to satisfy their desires without verbalizing what they want. Within The New York Times’ recent indictment of Amazon’s working culture was a hint of this idea. “A customer was able to get an Elsa doll that they could not find in all of New York City, and they had it delivered to their house in 23 minutes,” one Amazon executive boasted. Why the 23-minute wait? If Amazon aims to connect its customers with products as fast as possible, surely the ultimate service would be to eliminate the delay between wanting and having altogether.
It’s not as unrealistic as it sounds. Amazon is already toying with the idea of “predictive shopping,” in which goods are delivered to shipping hubs or trucks in anticipation of what consumers will order, based purely on their past shopping behavior. The next stage would be to mail these goods to consumers before they have even consciously decided to buy them. Research by the legal scholar and former Obama advisor Cass Sunstein has revealed that people are surprisingly enthusiastic about this proposition. The Elsa doll could simply show up on the customer’s doorstep, without so much as a choice or demand being exercised.
“Affective computing” technologies complement this drive to circumvent words. Companies like Affectiva and Realeyes use movement of the face to understand consumers; Beyond Verbal uses intonations in the voice. Ginger.io and Mobile Therapy mine data produced by smartphone users to spot fluctuations in mental health. These are all examples of what the British media scholar Andrew McStay has termed “empathic media.”
Perhaps the most crucial component of our emerging telepathic future, as Zuckerberg’s comment suggests, is the rise of wearables. Some predict that the Apple Watch will track emotions within a few years, while companies like Humanyze already use wearable technology in tandem with social-media analytics to help employers know how their employees are feeling, physically and emotionally. “Brain-to-brain” communication may still be confined to university labs for the time being, but workplaces are filling up with other forms of physiological monitoring that accomplish roughly the same thing.
Lurking beneath the push toward these technologies is a relentless attack on language as unreliable and misleading. The boom in affective computing and wearables—and the various “smart” infrastructures that interface with these technologies—is driven by the promise of access to “real” emotions and “real” desires, accompanied by ways of transmitting these via non-verbal codes.
These strategies for circumventing language are examples of what the philosopher Slavoj Zizek has called the “crisis of symbolic efficiency.” Somehow, words no longer seem trustworthy or adequate as ways of representing experience. They don’t grasp the truth firmly enough; they slip and slide around. Best to find some more reliable way of communicating experiences from one brain to another.
There’s a recent precedent for this hostility to language. The birth of cybernetics in the 1940s reimagined communication as a form of predictable interaction between any number of physical things, human and non-human. The MIT mathematician Norbert Wiener developed the concept—which he defined as “control and communication in the animal and the machine”—following a wartime project aimed at increasing the accuracy of anti-aircraft guns. Wiener surmised that pilots react to being shot at in a predictable, patterned fashion. The gunner and the pilot were effectively communicating with each other, despite no words being exchanged, with the actions of each one influencing the actions of the other. (It seems that shooting accuracy is a recurring gauge of telepathic success in the history of American science.)
And in the legacy of the cyberneticians, the purveyors of “smart” technologies promise a form of perfectly predictable interaction between individual and environment, in which nothing needs to be said along the way.
But there is another, less frequently articulated reason why Silicon Valley wants to replace speech. One characteristic of verbal languages is that nobody can own them. Meanwhile, emoji characters are copyrighted, and software can be patented. The machinic capacity to measure emotions via the face or tone of voice is in the possession of businesses, and is currently being rapidly capitalized by private-equity investment. Industrial capitalism privatized the means of production. Digital capitalism seeks to privatize the means of communication.
But somebody—a human being—still needs to decide what counts as “happy” or “want” before a machine can be programmed to identify and transmit those concepts. Zuckerberg’s “ultimate communication,” uncluttered by culture or metaphor, would still be mediated by something, designed according to the cultural assumptions of the scientist or programmer. The telepathic fantasy—which is also the ideal of “smartness”—is of a world so wonderfully accommodating to our needs that we never even need to ask for anything. Purveyors of this silent future promise absolute intimacy between self and world—but feelings and desires in their purest form, unencumbered by the messiness of language, will still be filtered through someone else’s lens.