Stanford's Clifford Nass has devoted his career to understanding how people interact with computers. In hundreds of papers, one key lesson emerged: we treat computers like people, even though they clearly are not. He used that insight to improve interface design, making computers friendlier and more helpful.
From a research perspective, using computers instead of people to study human interactions reduces the noise produced when people talk to people. Humans are just too specific to yield generally useful data about how to behave. We all know that the strategies that work for charming, good-looking people won't necessarily work for everyone.
"What the computer does is allow us to get at that which is most fundamental, most basic, but also most powerful in the way people interact with each other," Nass told me.
"The quote-unquote deficiencies of the computer enable it to come up with rules that will work for anybody," Nass said.
That is to say, if a computer can consistently elicit a certain emotional response from people, it's safe to assume that people could succeed in similar ways. "If there are social rules that work well for the most pathetically unsocial thing you can conceive of -- the computer -- think how much better it's going to work with real people."
Rule number one? People love to be flattered. Here, we present a case study from the book about the exceptionally high value of telling people how great they are. Speaking of which, have I ever told you how smart and successful The Atlantic's audience is? Best readers in the world!
Is Flattery Useful?
My exploration of flattery, then, became the first study in which I used computers to uncover social rules to guide how both successful people and successful computers should behave. Working with my Ph.D. student B. J. Fogg (now a consulting professor at Stanford), we started by programming a computer to play a version of the game Twenty Questions.
The computer "thinks" of an animal. The participant then has to ask "yes" or "no" questions to narrow down the possibilities. After ten questions, the participant guesses the animal. At that point, rather than telling participants whether they are right or wrong, the computer simply tells the users how effective or ineffective their questions have been. The computer then "thinks" of another animal and the questions and feedback continue. We designed the game this way for a few reasons: the interaction was constrained and focused (avoiding the need for artificial intelligence), the rules were simple and easy to understand, and people typically play games like it with a computer.
Having created the basic scenario, we could now study flattery. When participants showed up at our laboratory, we sat them down in front of a computer and explained how the game worked. We told one group of participants that the feedback they would receive was highly accurate and based on years of research into the science of inquiry. We told a second group of participants that while the system would eventually be used to evaluate their question-asking prowess, the software hadn't been written yet, so they would receive random comments that had nothing to do with the actual questions they asked. The participants in this condition, because we told them that the computer's comments were intrinsically meaningless, would have every reason to simply ignore what the computer said. A third control group did not receive any feedback; they were just asked to move on to the next animal after asking ten questions.
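The three-condition design above can be sketched in a few lines of code. This is a hypothetical illustration, not the study's actual software (which is not described in detail): the condition names and praise strings are invented, and the "sincere" and "flattery" conditions deliberately draw from the same pool of comments, since the experiment's point is that only what participants were *told* about the feedback differed.

```python
import random

# Invented praise strings for illustration; the original study's
# comments are not reproduced in the article.
PRAISE = [
    "That was a very insightful question.",
    "Your line of questioning shows real skill.",
    "Excellent strategy -- few people ask such effective questions.",
]

def feedback(condition):
    """Return the comment shown after a round of Twenty Questions.

    'sincere'  -- told the feedback reflects real analysis of their questions
    'flattery' -- told the comments are random placeholders
    'control'  -- no feedback at all
    """
    if condition == "control":
        return None
    # Both remaining conditions draw identical comments; only the
    # participants' beliefs about them differ.
    return random.choice(PRAISE)
```

The design isolates belief from content: if flattered participants respond like sincerely praised ones despite knowing the comments are meaningless, the praise itself is doing the work.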