Alexis Madrigal is a senior editor at The Atlantic, where he oversees the Technology channel. He's the author of Powering the Dream: The History and Promise of Green Technology.
The New York Observer calls Madrigal "for all intents and purposes, the perfect modern reporter." He co-founded Longshot magazine, a high-speed media experiment that garnered attention from The New York Times, The Wall Street Journal, and the BBC. While at Wired.com, he built Wired Science into one of the most popular blogs in the world. The site was nominated for best magazine blog by the MPA and for best science Web site in the 2009 Webby Awards. He also co-founded Haiti ReWired, a groundbreaking community dedicated to the discussion of technology, infrastructure, and the future of Haiti.
He's spoken at Stanford, Caltech, Berkeley, SXSW, E3, and the National Renewable Energy Laboratory, and his writing was anthologized in Best Technology Writing 2010 (Yale University Press).
Madrigal is a visiting scholar at the University of California at Berkeley's Office for the History of Science and Technology. Born in Mexico City, he grew up in the exurbs north of Portland, Oregon, and now lives in Oakland.
From a research perspective, using computers instead of people to study human interactions reduces the amount of noise that gets produced when people talk to people. Humans are just too specific to yield generally useful data about how to behave. We all know that the strategies that work for charming, good-looking people won't necessarily work for everyone.
"What the computer does is allow us to get at that which is most fundamental, most basic, but also most powerful in the way people interact with each other," Nass told me.
"The quote-unquote deficiencies of the computer enable it to come up with rules that will work for anybody," Nass said.
That is to say, if a computer can consistently elicit a certain emotional response from people, it's safe to assume that people could succeed in similar ways. "If there are social rules that work well for the most pathetically unsocial thing you can conceive of -- the computer -- think how much better it's going to work with real people."
Rule number one? People love to be flattered. Here, we present a case study from the book about the exceptionally high value of telling people how great they are. Speaking of which, have I ever told you how smart and successful The Atlantic's audience is? Best readers in the world!
My exploration of flattery, then, became the first study in which I used computers to uncover social rules to guide how both successful people and successful computers should behave. Working with my Ph.D. student B. J. Fogg (now a consulting professor at Stanford), I started by programming a computer to play a version of the game Twenty Questions.
The computer "thinks" of an animal. The participant then has to ask "yes" or "no" questions to narrow down the possibilities. After ten questions, the participant guesses the animal. At that point, rather than telling participants whether they are right or wrong, the computer simply tells the users how effective or ineffective their questions have been. The computer then "thinks" of another animal and the questions and feedback continue. We designed the game this way for a few reasons: the interaction was constrained and focused (avoiding the need for artificial intelligence), the rules were simple and easy to understand, and people typically play games like it with a computer.
Having created the basic scenario, we could now study flattery. When participants showed up at our laboratory, we sat them down in front of a computer and explained how the game worked. We told one group of participants that the feedback they would receive was highly accurate and based on years of research into the science of inquiry. We told a second group of participants that while the system would eventually be used to evaluate their question-asking prowess, the software hadn't been written yet, so they would receive random comments that had nothing to do with the actual questions they asked. The participants in this condition, because we told them that the computer's comments were intrinsically meaningless, would have every reason to simply ignore what the computer said. A third control group did not receive any feedback; they were just asked to move on to the next animal after asking ten questions.
The computer gave both sets of users who received feedback identical, glowing praise throughout the experiment. People's answers were "ingenious," "highly insightful," "clever," and so on; every round generated another positive comment. The sole difference between the two groups was that the first group of participants thought that they were receiving accurate praise, while the second group thought they were receiving flattery, with no connection to their actual performance. After participants went through the experiment, we asked them a number of questions about how much they liked the computer, how they felt about their own performance and the computer's performance, and whether they enjoyed the task.
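For readers who think in code, here is a minimal sketch of the design just described -- not Nass and Fogg's actual software. The condition labels, the number of rounds, and the exact praise strings are my own illustrative assumptions; only the structure (identical praise in both feedback conditions, none in the control) comes from the text.

```python
import random

# Illustrative sketch of the three-condition flattery experiment described
# above. Condition names, round count, and praise strings are assumptions.

PRAISE = [
    "Your questions were ingenious.",
    "That was a highly insightful line of questioning.",
    "Very clever questions.",
]

def play_round(condition):
    """One round: the computer 'thinks' of an animal, the participant asks
    ten yes/no questions and guesses; the computer then comments on the
    questions, never on whether the guess was right."""
    # ... ten yes/no questions and a guess would happen here (omitted) ...
    if condition == "control":
        return None  # no feedback; move straight on to the next animal
    # Both remaining conditions display identical, glowing praise. The only
    # difference is what participants were told beforehand: "accurate"
    # participants believed the comments reflected real evaluation, while
    # "flattery" participants were told the comments were random.
    return random.choice(PRAISE)

if __name__ == "__main__":
    for condition in ("accurate", "flattery", "control"):
        feedback = [play_round(condition) for _ in range(3)]  # three rounds
        print(f"{condition:>9}: {feedback}")
```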
If flattery was a bad strategy, we would find a strong dislike of the flatterer computer and its performance, and flattery would not affect how well participants thought they had done. But if flattery was effective, flattered participants would think that they had done very well and would have had a great time; they would also think well of the flatterer computer.
Participants reported that they liked the flatterer computer (which gave random and generic feedback) as much as they liked the accurate computer. Why did people like the flatterer even though it was a "brownnoser"?
Because participants happily accepted the flatterer's praise: the questionnaires showed that positive feedback boosted users' perceptions of their own performance regardless of whether the feedback was (seemingly) sincere or random. Participants even considered the flatterer computer as smart as the "accurate" computer, even though we told them that the former didn't have any evaluation algorithms at all!
Did the flattered participants simply forget that the feedback was random? When asked whether they paid attention to the comments from the flatterer computer, participants uniformly responded "no." One participant was so dismissive of this idea that in addition to answering "no" to the question, he wrote a note next to it saying, "Only an idiot would be influenced by comments that had nothing to do with their real performance."
Oddly, these influenced "idiots" were graduate students in computer science. Although they consciously knew that the feedback from the flatterer was meaningless, they automatically and unconsciously accepted the praise and admired the flatterer. The results of this study suggest the following social rule: don't hesitate to praise, even if you're not sure the praise is accurate. Receivers of the praise will feel great and you will seem thoughtful and intelligent for noticing their marvelous qualities -- whether they exist or not.

Excerpted from THE MAN WHO LIED TO HIS LAPTOP: WHAT MACHINES TEACH US ABOUT HUMAN RELATIONSHIPS by Clifford Nass, by arrangement with Current, a member of Penguin Group (USA), Inc. Copyright (c) Clifford Nass, 2010.
Your Privacy"Your remoteness is critical to us"? What the hell could that mean? (Though I have to admit that I love its alienness, language no human would generate.)
Your remoteness is critical to us. To improved strengthen your remoteness we yield this notice explaining a online report practices as good as a choices we can have about a approach your report is picked up as good as used. To have this notice easy to find, we have it accessible upon a homepage as good as during each indicate where privately identifiable report might be requested.
[S]cientists have found that when rats have a new experience, like exploring an unfamiliar area, their brains show new patterns of activity. But only when the rats take a break from their exploration do they process those patterns in a way that seems to create a persistent memory of the experience.

This is the only scientific evidence the Times gives that our brains' ability to learn is limited by frequent exposure to digital devices. That's a lot to pin on a few studies in rats.
"Reading my papers and reading the article, there is a big jump there," he said. But he felt comfortable putting the idea out there because there is "a lot of converging other stuff" outside of his own work that suggests filling our little downtime moments with Tweeting or email checking might not be the best idea.
"As far as we can tell, the brain takes advantage of -- speaking colloquially -- 'downtime,'" Frank said. "It's no longer focused on the outside world. It's recapitulating past experience internally."What really matters, though, is not whether we're using a digital device but whether we're focused externally or internally.
"I think it probably is true that we have limited attentional resources and we can choose how much of the time we're focused on something internal versus external," Frank said. "If you do spend all of your time focused on external things, you're less able to allow internal processes to happen. And my guess is that those internal processes are pretty important. But that's not specific to digital devices, that's anything."
Perhaps, Frank did suggest, digital devices might make it easier to distract ourselves.
"There is the potential for low level cognitive engagement with things that could hinder other processes," he said. "That seems reasonably plausible to me, and I' d be surprised if that weren't right in some way."
But other stuff can inhibit those same processes. Reading the paper, paying very close attention to other people talking on the train, listening to talk radio. All of these things could conceivably distract you from letting your mind rest. "My guess -- and this is just a guess -- is that it has much more to do with attentional state than what specifically people are focused on," Frank said.
The Times series may be called "Your Brain on Computers," but one device that predates the digital age may be the one that's particularly bad for your neurons.
"Television causes people's brains to enter a weird state where they are passive but focused," Frank said. "With Television, as far as I understand it, not a lot of higher thinking goes on."
And that's one thing missing from the Times article: a sense of the full breadth of choices humans can make, nearly all of which are technologically mediated.
In their version of the story, it's the lady who watches television and checks her email while on the treadmill versus the trail runner. But what about the person who runs in city streets listening to podcasts (like myself)? What about people who play basketball for exercise? Would they be better off running because the game distracts them from internal processes?
Another thing that's missing from the story: numbers. We don't really know the scale of the problem we're dealing with. Are devices impairing our learning a little or a lot? No one really knows, but the Times leaves the impression it's the latter without providing evidence that that's the case.
If you want to learn more about Frank's work, head to his publications website, which has copies of about a dozen papers and book chapters.