Could using modern scientific tools to separate out the ingredients in the soup of moral decision-making—peeking into the brain to see how emotion and reason really operate—shed light on these philosophical questions? The field of moral cognition, an interdisciplinary effort among researchers in social and cognitive psychology, behavioral economics, and neuroscience, has tried to do just that. Since the early 2000s, moral psychologists have been using experimental designs that assess people’s behavior and performance on certain tasks, along with fMRI scans that glimpse the brain’s hidden activity, to illuminate the structure of moral thinking.
One pioneer in this field, the philosopher and Harvard University psychology professor Joshua Greene, combined an iconic and thorny ethical thought experiment—the “trolley problem,” in which you must decide whether you’d flip a switch, or push a man off a footbridge, to cause one person to die instead of five—with brain imaging back in 2001. Those experiments, and subsequent ones, have helped to demystify the role that intuition plays in how we make ethical tradeoffs—and ultimately showed that moral decisions are subject to the same biases as any other type of decision.
I spoke with Greene about how moral-cognition research illuminates the role of emotion in morality—scientifically, but perhaps also philosophically. Below is a lightly edited and condensed transcript of our conversation.
Lauren Cassani Davis: Your research has revealed that people’s intuitions about right and wrong often influence their decisions in ways that seem irrational. If we know they have the potential to lead us astray, are our moral intuitions still useful?
Joshua Greene: Oh, absolutely. Our emotions, our gut reactions, evolved biologically, culturally, and through our own personal experiences because they have served us well in the past—at least according to certain criteria, which we may or may not endorse. The idea is not that they’re all bad, but rather that they’re not necessarily up to the task of helping us work through modern moral problems: the kinds of problems people disagree about, which arise from cultural differences, from new opportunities or problems created by technology, and so on.
Davis: You describe moral decision-making as a process that combines two types of thinking: “manual” thinking that is slow, consciously controlled, and rule-based, and “automatic” mental processes that are fast, emotional, and effortless. How widespread is this “dual-process” theory of the human mind?
Greene: I haven’t taken a poll, but it’s certainly—not just for morality but for decision-making in general—very hard to find a paper that doesn’t support, criticize, or otherwise engage with the dual-process perspective. Thanks primarily to Daniel Kahneman [the author of Thinking, Fast and Slow] and Amos Tversky, and everything that follows them, it’s the dominant perspective in judgment and decision-making. But it does have its critics. There are some people, coming from neuroscience especially, who think that it’s oversimplified. They are starting with the brain and are very much aware of its complexity, aware that these processes are dynamic and interacting, aware that there aren’t just two circuits there, and as a result they say that the dual-process framework is wrong. But to me, it’s just different levels of description, different levels of specificity. I haven’t encountered any evidence that has caused me to rethink the basic idea that automatic and controlled processing make distinct contributions to judgment and decision-making.