Quick! Match the person with the noun:
Woman      Test Tube
Husband    Liberal Arts
That’s not a real psychology test, of course, but it’s a play on what’s called an “implicit association test,” a type of activity that psychologists ask study participants to perform in order to determine whether they might secretly harbor, in this case, sexist ideas.
In these and similar types of studies, psychologists have long relied on reaction times to hint at hidden prejudices. For example, even avowed feminists might be slower to associate female names with science and technology careers.
“These snap judgments we often make happen quickly, in just hundreds of milliseconds,” Jonathan Freeman, a psychology professor at Dartmouth, told me.
But as far as human measurements go, reaction time is pretty one-dimensional: it only tells the scientist that a person took longer to make a certain choice, not how they ultimately arrived at that decision. So a few years ago, Freeman sought to tap into the mind of a study subject as he or she wrestled with categorizing, say, a woman with a masculine haircut as "male" or "female."
The solution he came up with is software called MouseTracker. It works by following the subject's mouse movements as they drag a given object across a screen toward a pair of descriptor words. For example, here's that haircut study, as performed with MouseTracker:
The goal is to measure to what extent the participant considered the alternative before ultimately settling on the “right” word. Freeman thinks mouse tracking improves upon older metrics, like response time, because it creates “a continuous stream of rich cognitive output.”
“[Subjects] are pulled toward alternatives that are not chosen but do get considered,” he said. “If we put female features on a male face, they'll gravitate toward the female response before picking male.”
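To make the idea concrete, here is a minimal sketch of the kind of metric mouse-tracking studies typically report: the maximum perpendicular deviation of the cursor's path from the straight line between its start and end points. A big deviation means the hand swerved toward the rejected label before settling on the chosen one. (This is an illustration of the general technique, not MouseTracker's actual code; the function name and sample coordinates are invented for the example.)

```python
import math

def max_deviation(points):
    """Maximum perpendicular distance of a mouse trajectory
    (a list of (x, y) points) from the straight line joining
    its start and end points."""
    (x0, y0), (x1, y1) = points[0], points[-1]
    length = math.hypot(x1 - x0, y1 - y0)
    if length == 0:
        return 0.0
    best = 0.0
    for (x, y) in points[1:-1]:
        # Perpendicular distance from (x, y) to the start-end line,
        # via the cross-product formula.
        d = abs((x1 - x0) * (y0 - y) - (x0 - x) * (y1 - y0)) / length
        best = max(best, d)
    return best

# A perfectly straight drag shows zero deviation; a drag that arcs
# toward the rejected label before correcting shows a large one.
straight = [(0, 0), (1, 1), (2, 2)]
curved = [(0, 0), (2, 0), (2, 2)]
print(max_deviation(straight))  # 0.0
print(max_deviation(curved))    # ≈ 1.414
```

On this view, the "curve of the waver" the article describes is just how far that arc bulges toward the alternative before the hand commits.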
The method has been used by 1,750 researchers worldwide since 2008, according to Dartmouth’s numbers. Some of the studies have found that the curve of the mouse-holder’s waver correlates with fascinating real-life outcomes.
For one recent study, for example, Freeman and other researchers asked a group of volunteers to categorize male and female Senate and gubernatorial candidates’ faces as either male or female:
For the female candidates, but not the male ones, the more the participant's hand was drawn to the "male" label (that is, the more masculine the candidate looked), the more likely she was to lose her election. The effect was even more pronounced in conservative constituencies. "We found that only 380 milliseconds after being exposed to a female politician's face, how drawn the hand was to the male response predicted her electoral success or failure," Freeman told me.
The implication is that voters, particularly Republican ones, are reluctant to go for masculine-looking women.
In the real world, this type of research could be used by marketers to test, say, iPhone colors. Or public health experts could correlate regional obesity rates with how quick people are to say they’d rather eat a cupcake than a banana.
It’s worth remembering that there are major limitations to these types of tests: for one thing, they don’t necessarily prove intrinsic prejudice. As law professor Amy Wax and business professor Philip Tetlock pointed out in the Wall Street Journal in 2005, the study subjects’ hemming and hawing before they provide their responses might simply reflect awareness of common stereotypes, rather than support for them. “Not everyone who knows the stereotypes necessarily endorses them,” they wrote.
Social psychology has been plagued by scandal and doubt in recent years. Some critics argue that by mashing and remixing their data, psychologists can provide (often publishable!) evidence of whatever they want. (One such prank was able to “prove” that randomly selected people who hear the Beatles song “When I’m Sixty-Four” are younger than people who don’t—an obviously impossible result.) Researchers running “priming” studies have failed to replicate their findings, and prominent psychologists have had to retract papers or resign for fabricating their data.
In the midst of this social psychology "train wreck," as psychologist Daniel Kahneman called it, MouseTracker might become notable for its precision. Though it doesn't prevent the scientist from sending her results through a blender, it does offer a clear yardstick, a line zigging or zagging to a specific degree, by which to measure, if not true bias, then at least a hand's interpretation of a brain's pause.
“It's certainly very sad to me that social psychology has gotten such a bad reputation in the mainstream press,” Freeman said. “I do think that this work is helping or could help social psychology to improve the researchers’ empirical rigor and level of specificity.”
Now that the software has been used in 30 published studies and counting, we can only hope.