Book Excerpt: What Computers Teach Us About Emotion

Stanford's Clifford Nass has devoted his career to understanding how people interact with computers. Across hundreds of papers, one key lesson has emerged: we treat computers like people, even though they clearly are not. He used that insight to improve interface design, making computers friendlier and more helpful.

In a new book, he has inverted that work: now he asks what we can learn from computers about how to be better people. The Man Who Lied to His Laptop comes out today from Current Books.

"What the computer does is allow us to get at that which is most fundamental, most basic, but also most powerful in the way people interact with each other," Nass told me.
"The quote-unquote deficiencies of the computer enable it to come up with rules that will work for anybody," Nass said.
That is to say, if a computer can consistently elicit a certain emotional response from people, it's safe to assume that people could succeed in similar ways. "If there are social rules that work well for the most pathetically unsocial thing you can conceive of -- the computer -- think how much better it's going to work with real people."
Rule number one? People love to be flattered. Here, we present a case study from the book about the exceptionally high value of telling people how great they are. Speaking of which, have I ever told you how smart and successful The Atlantic's audience is? Best readers in the world!
Is Flattery Useful?
My exploration of flattery, then, became the first study in which I used computers to uncover social rules to guide how both successful people and successful computers should behave. My Ph.D. student B. J. Fogg (now a consulting professor at Stanford) and I started by programming a computer to play a version of the game Twenty Questions.
The computer "thinks" of an animal. The participant then has to ask "yes" or "no" questions to narrow down the possibilities. After ten questions, the participant guesses the animal. At that point, rather than telling participants whether they are right or wrong, the computer simply tells the users how effective or ineffective their questions have been. The computer then "thinks" of another animal and the questions and feedback continue. We designed the game this way for a few reasons: the interaction was constrained and focused (avoiding the need for artificial intelligence), the rules were simple and easy to understand, and people typically play games like it with a computer.
Having created the basic scenario, we could now study flattery. When participants showed up at our laboratory, we sat them down in front of a computer and explained how the game worked. We told one group of participants that the feedback they would receive was highly accurate and based on years of research into the science of inquiry. We told a second group of participants that while the system would eventually be used to evaluate their question-asking prowess, the software hadn't been written yet, so they would receive random comments that had nothing to do with the actual questions they asked. The participants in this condition, because we told them that the computer's comments were intrinsically meaningless, would have every reason to simply ignore what the computer said. A third control group did not receive any feedback; they were just asked to move on to the next animal after asking ten questions.
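The three conditions, then, differed only at the feedback stage. Continuing the sketch above (the condition names are illustrative assumptions, though the praise wording echoes the excerpt):

    import random

    # Both feedback groups draw identical glowing praise from one pool;
    # the conditions differ only in what participants are TOLD about it.
    PRAISE = [
        "Your question was ingenious.",
        "That was a highly insightful line of inquiry.",
        "A clever question.",
    ]

    def give_feedback(condition):
        if condition in ("accurate", "flattery"):
            print(random.choice(PRAISE))
        elif condition == "control":
            pass  # no feedback; the participant simply moves to the next animal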
The computer gave both groups of participants who received feedback identical, glowing praise throughout the experiment. Their questions were "ingenious," "highly insightful," "clever," and so on; every round generated another positive comment. The sole difference between the two groups was that the first group of participants thought that they were receiving accurate praise, while the second group thought they were receiving flattery, with no connection to their actual performance. After participants went through the experiment, we asked them a number of questions about how much they liked the computer, how they felt about their own performance and the computer's performance, and whether they enjoyed the task.
If flattery were a bad strategy, we would find a strong dislike of the flatterer computer and its performance, and the flattery would not affect how well participants thought they had done. But if flattery were effective, flattered participants would think that they had done very well and would report enjoying the task; they would also think well of the flatterer computer.
Participants reported that they liked the flatterer computer (which gave random and generic feedback) as much as they liked the accurate computer. Why did people like the flatterer even though it was a "brownnoser"?
Because participants happily accepted the flatterer's praise: the questionnaires showed that positive feedback boosted users' perceptions of their own performance regardless of whether the feedback was (seemingly) sincere or random. Participants even considered the flatterer computer as smart as the "accurate" computer, even though we told them that the former didn't have any evaluation algorithms at all!
Did the flattered participants simply forget that the feedback was random? When asked whether they paid attention to the comments from the flatterer computer, participants uniformly responded "no." One participant was so dismissive of this idea that in addition to answering "no" to the question, he wrote a note next to it saying, "Only an idiot would be influenced by comments that had nothing to do with their real performance."
Oddly, these influenced "idiots" were graduate students in computer science. Although they consciously knew that the feedback from the flatterer was meaningless, they automatically and unconsciously accepted the praise and admired the flatterer. The results of this study suggest the following social rule: don't hesitate to praise, even if you're not sure the praise is accurate. Receivers of the praise will feel great and you will seem thoughtful and intelligent for noticing their marvelous qualities--whether they exist or not.
Excerpted from THE MAN WHO LIED TO HIS LAPTOP: WHAT MACHINES TEACH US ABOUT HUMAN RELATIONSHIPS by Clifford Nass by arrangement with Current, a member of Penguin Group (USA), Inc., Copyright (c) Clifford Nass, 2010.