IBM’s Watson, the AI system best known for winning Jeopardy!, has engaged in creative collaborations, too. It suggested clips from the horror movie Morgan to use for a trailer, for instance, allowing the editor to produce a finished product in a day rather than in weeks.
Eventually, digital assistants may co-author anything from the perfect corporate memo to the next great American novel. Jamie Brew, a comedy writer for the website ClickHole, developed a predictive text interface that takes examples of a literary form and assists in producing new pieces, by giving the user a series of choices for what word to write next. Together he and the interface have churned out a new X-Files script and mock Craigslist ads and IMDb content warnings.
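The mechanics behind an interface like Brew’s can be approximated with a simple word-level Markov model: learn, from sample text, which words tend to follow which, then offer the writer the most frequent candidates at each step. The sketch below is a toy illustration under that assumption, not Brew’s actual tool; all function names are invented.

```python
from collections import Counter, defaultdict

def build_model(corpus):
    """Map each word to a Counter of the words observed to follow it."""
    words = corpus.split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def suggest(model, word, k=3):
    """Return up to k candidate next words, most frequent first."""
    return [w for w, _ in model[word].most_common(k)]

def compose(model, start, length, choose=lambda options: options[0]):
    """Grow a line word by word; `choose` stands in for the human
    picking one of the offered candidates at each step."""
    out = [start]
    for _ in range(length):
        options = suggest(model, out[-1])
        if not options:
            break
        out.append(choose(options))
    return " ".join(out)
```

Trained on even a few lines of a given form, `suggest` surfaces its characteristic phrasings, and swapping in a real person for the default `choose` function yields exactly the human-in-the-loop loop the article describes: the machine proposes, the writer disposes.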
4 | Mutual Understanding
Most machine-learning systems are unable to explain in human terms why they made a decision or what they intend to do next. But researchers are working to fix that. The military’s Defense Advanced Research Projects Agency recently announced a plan to invest significantly in explainable AI, or XAI, to make machine-learning systems more correctable, predictable, and trustworthy. Armed with XAI, your digital assistant might be able to tell you it picked a certain driving route because it knows you like back roads, or that it suggested a word change so that the tone of your email would be friendlier. In addition, with more awareness, “the robot would know when to ask for help,” says Manuela Veloso, the head of Carnegie Mellon’s machine-learning department, who calls this skill “symbiotic autonomy.”
Researchers are developing artificial emotional intelligence, or emotion AI, so that our agents can better understand us, too. Companies such as Affectiva and Emotient (which was bought by Apple) have created systems that read emotions in users’ faces. IBM’s Watson can analyze text not just for emotion but for tone and, over time, for personality, according to Rob High, Watson’s chief technology officer. Eventually, AI systems will analyze a person’s voice, face, posture, words, context, and user history for a better understanding of what the user is feeling and how to respond. The next step, according to Rana el Kaliouby, Affectiva’s co-founder and CEO, will be an emotion chip in our phones and TVs that can react in real time. “I think in the future we’ll assume that every device just knows how to read your emotions,” she says.
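At its crudest, the text-analysis side of emotion AI can be sketched as a lexicon lookup: count emotionally loaded words and compare tone scores. Real systems like Watson’s are far more sophisticated; the word lists and function names below are invented purely for illustration.

```python
# Toy lexicon-based tone scorer -- a crude stand-in for commercial
# tone analysis. The cue-word sets are illustrative, not a real lexicon.
TONE_LEXICON = {
    "friendly": {"thanks", "please", "appreciate", "glad", "happy"},
    "harsh": {"immediately", "unacceptable", "must", "failure", "wrong"},
}

def tone_profile(text):
    """Count how many cue words for each tone appear in the text."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return {tone: len(words & cues) for tone, cues in TONE_LEXICON.items()}

def friendlier_suggestion(text):
    """Offer a rewording hint when harsh cues outnumber friendly ones."""
    profile = tone_profile(text)
    if profile["harsh"] > profile["friendly"]:
        return "Consider softening the wording, e.g. adding 'please' or 'thanks'."
    return None
```

A scorer along these lines is what lets an assistant flag an email as curt and suggest a friendlier phrasing; production systems replace the hand-made word lists with machine-learned models over voice, face, and context as well as text.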
5 | Getting Attached
We already know that people can form emotional bonds with Roomba vacuum cleaners and other relatively rudimentary robots. How will we relate to AI agents that speak to us in human voices and seem to understand us on a deep level?
Spivack, the futurist, pictures people partnering with lifelong virtual companions. You’ll give an infant an intelligent toy that learns about her and tutors her and grows along with her. “It starts out as a little cute stuffed animal,” he says, “but it evolves into something that lives in the cloud and they access on their phone. And then by 2050 or whatever, maybe it’s a brain implant.” Among the many questions raised by such a scenario, Spivack asks: “Who owns our agents? Are they a property of Google?” Could our oldest friends be revoked or reprogrammed at will? And without our trusted assistants, will we be helpless?