Your life has almost certainly been affected by Brian Wansink.
Wansink is a professor at Cornell University—for nine more months, before he is to retire, as he described it to me Sunday evening, “sooner and under different circumstances than I expected.”
Others describe it as disgrace, an abrupt fall from a position of great prestige that casts a shadow on a highly consequential but already widely distrusted area of science: how food affects our health.
Wansink has been the director of Cornell’s Food and Brand Lab for a decade, where he studied how our environments determine what and how we eat. He was integral in leading the narrative that obesity and diabetes have less to do with individual willpower or flawed personhood than with psychological manipulation by a food industry that wants to sell as many cheap calories as possible.
He published hundreds of studies that were aimed, from conception through execution, at changing the food environment to help make the more healthful decision easier. In painting the picture of people as highly vulnerable to environmental cues, he famously demonstrated how when eating from “bottomless bowls”—secretly replenished with soup from the inside—people tend to eat more. The act of eating was apparently less about sating hunger than about completing the task of emptying a bowl, and so the takeaway was that people should use smaller bowls and take smaller portions.
Similarly, he reported that enormous buckets of popcorn at movie theaters can make us eat more, even when the popcorn is “not palatable.” He also popularized the concept of “health halos” that certain foods carry to make them seem healthy even when they aren’t—because, for example, they are marketed as “gluten free” even when they are simply brownies.
The popularity and apparent practicality of his work translated into implementation: His lab informed food companies implementing the 100-calorie snack packs you see in stores, for example, under the idea that these smaller portions would get people to eat less. He led the national committee on dietary guidelines and worked to improve the food ecosystems in public schools, the U.S. Army, and Google, among others.
On Thursday, Cornell’s provost, Michael Kotlikoff, issued a statement (touted by a university press release) that said a faculty committee had investigated Wansink and found that he had “committed academic misconduct in his research and scholarship, including misreporting of research data, problematic statistical techniques, failure to properly document and preserve research results, and inappropriate authorship.”
The investigation is the culmination of a wide-ranging inquisition into Wansink by journalists and academics that appears to date back to a post on his blog two years ago, one that many readers took as encouraging his graduate students to engage in the pervasive and dishonest practice of “p-hacking”: reworking and reanalyzing data until a positive, publishable result emerges, rather than reporting the outcome of the hypothesis the experiment actually set out to test.
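To see why the practice is so corrosive, consider a toy simulation (an illustration of mine, not anything drawn from Wansink’s work): if you run 20 statistical tests on pure noise, each at the conventional 5 percent significance threshold, the chance that at least one comes back “significant” is roughly 64 percent. A researcher who keeps slicing the data until a test clears the bar will almost always find something to publish.

```python
# Illustrative sketch: why running many tests on the same data
# inflates false positives. All samples here are pure noise, so
# every "significant" result is spurious by construction.
import random
import statistics

random.seed(0)

def t_statistic(a, b):
    """Two-sample t statistic (equal-variance form)."""
    na, nb = len(a), len(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    pooled = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / (pooled * (1 / na + 1 / nb)) ** 0.5

TRIALS, TESTS, N, CRIT = 1000, 20, 30, 2.0  # |t| > ~2 corresponds to p < 0.05 here

hits = 0
for _ in range(TRIALS):
    # One "study": 20 subgroup comparisons, all drawn from the SAME distribution.
    if any(abs(t_statistic([random.gauss(0, 1) for _ in range(N)],
                           [random.gauss(0, 1) for _ in range(N)])) > CRIT
           for _ in range(TESTS)):
        hits += 1  # at least one spurious "finding" in this study

print(f"Analytic bound (1 - 0.95^20): {1 - 0.95 ** TESTS:.2f}")  # about 0.64
print(f"Simulated rate of >= 1 false positive: {hits / TRIALS:.2f}")
```

The simulation matches the back-of-the-envelope math: with independent tests, the false-positive rate compounds as 1 − 0.95²⁰, which is why statisticians insist that exploratory comparisons be disclosed and corrected for, not quietly discarded.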
The post led other researchers to ask Wansink for his data, which in turn brought apologies and promises to redo his statistical analyses. Eventually his articles started getting retracted. In February of this year, the BuzzFeed News reporter Stephanie Lee published a story titled “Here’s How Cornell Scientist Brian Wansink Turned Shoddy Data Into Viral Studies About How We Eat” that included emails in which Wansink encouraged the same sort of data manipulation. The emails were taken as a sort of smoking gun in an already suspicious situation.
Then, last week, it all came crashing down, with the prestigious journal network JAMA retracting six Wansink studies—bringing his retraction total to 13. What’s more, JAMA retracted them not necessarily because the studies were shown to be flawed, but because it said Cornell University could not “provide assurances regarding the scientific validity” of them.
Wansink sent me a rebuttal by email to each of Kotlikoff’s charges that included admitting to “mistakenly reporting the wrong ages for preschool children” and “some typos, transposition errors, and some statistical mistakes.” He also admitted, “We could have done a better job documenting and saving research results, and we came up with new standard operating procedures to correct this.” He concluded:
The interpretation of these four acts of misconduct can be debated, and I did so for a year without the success I expected. There was no fraud, no intentional misreporting, no plagiarism, or no misappropriation. I believe all of my findings will be either supported, extended, or modified by other research groups. I am proud of my research, the impact it has had on the health of many millions of people, and I am proud of my coauthors across the world.
I am not here to adjudicate these cases, or any of the other retractions and corrections, which you can read about at length in the work of many journalists and academics who are more skilled in the forensics of statistical analysis than I, and who can take you as far into the details as you’d like. The job of science journalism is to hold accountable the people who receive the public’s trust and funding, and that work has apparently been done here.
At this point, my question is: How could this happen? How did the system apparently fail to catch this pattern of behavior for so long, and how can this system be corrected to prevent this from happening again?
Taken individually, Wansink’s reported errors and misconduct are not novel or even especially rare. Scan sites like Retraction Watch and you’ll see bad science surfacing all the time. We rarely hear about these cases because the fact that a study was found, years later, to be flawed interests most readers of newspapers and magazines less than the news that a study found one simple trick to slimming down your waistline: smaller plates. Even if science editors were interested in publishing stories that aren’t of much interest to their readers, the social-media distribution ecosystem adds an increasingly opaque layer in which those gatekeepers have less and less power to get eyes onto a problem. The people will share what the people will share.
The Wansink saga has forced reflection on my own lack of skepticism toward research that confirms what I already believe, in this case that food environments shape our eating behaviors. For example, among his other retracted studies are those finding that we buy more groceries when we shop hungry and order healthier food when we preorder lunch. All of this seems intuitive. I have used the phrase health halo in my own writing, and am still inclined to think it’s a valid idea.
It’s easy to let down one’s skepticism toward apparently virtuous work. Studies are manipulated and buried and disingenuously designed or executed all the time for commercial reasons, notoriously in domains like pharmaceuticals, where there is a clear incentive to prove that a product is safe and effective—that the years and millions of dollars that went into developing a drug were not wasted, and rather that they were in service of a safe and effective billion-dollar product. But much of the inquiry into Wansink’s research practices centered on a study about getting children to choose fruits and vegetables as snacks when the foods were marked with stickers bearing popular cartoon characters. Why would someone fabricate a study about how to get kids to eat more fruits and vegetables?
Wansink describes himself as a “pracademic,” an academic aimed at practical problems and workable solutions, and as “a professor whose mission is to help transform people’s lives by finding the small changes that make the big difference.” The portmanteau captures his quirky sensibility and propensity for success in the TED Talk generation. He has a keen eye for studies that make stories, which get attention. He didn’t start out as a nutrition or obesity researcher at all, but as a Ph.D. in consumer behavior and then as an assistant professor of marketing at Dartmouth College before heading to the University of Illinois, where food became part of his vast domain as an endowed chair of “marketing, nutritional science, agricultural economics, and advertising.”
Wansink’s sense for harnessing buzz may have been a skill in the wrong domain. He’s a camera-friendly performer who might’ve done very well as a Bill Nye– or Neil deGrasse Tyson–type infotainer, in an industry where simplification to build a compelling narrative is not a bug but a feature, given the explicit mission to deliver an attention-grabbing-and-holding product to an audience. But buzz is a treacherous acquaintance. When it turns against you, as it inevitably does, the demand for a public execution grows, and the university now roundly condemns and the journals retract articles that they were once willing to publish without having access to the original data.
The other danger of buzz is that if you can get enough, you can become untouchable—like Gwyneth Paltrow or Deepak Chopra, who can say and do and sell most anything, regardless of the validity of their scientific claims. Paltrow has readily admitted that attempts at fact-checking her claims and holding her accountable only make her more popular with her base. When asked how they sleep at night, gurus tend to point to practical results in a few (or even many) cases they have seen—in fans and followers who have told them their life has changed.
Wansink appeared on The Oprah Winfrey Show and The Dr. Oz Show and in a “Got Milk?” campaign ad, but he never reached that level of untouchable buzz. When he had the wind at his back, though, the systems that should have been vetting his work were apparently remiss, and offered to publish studies that would get more buzz—attention for the journals and funding for the university, and even practical impact at a federal level that reflects still more prestige on all involved in the work.
The question is how to disconnect scientific work from the buzz cycle—to let people conduct experiments to answer questions in an environment as free as possible of any incentive that would bias that process. Please tell me if you have the answer. What I know is that it involves more public funding of science, not less, as President Donald Trump has called for. It also requires information-distribution infrastructure that is not dependent on how many people like and fave a certain study’s findings.
I asked Wansink what insight he took from the ordeal, and how the system and young scientists hoping to enter it could learn from his mistakes. He wrote that he still believes being “a scholar and an academic is an unbelievably great calling. It is totally enriching, and don’t let these events dissuade you from a great career.” He is insistent that existing policies based on his work are sound since, he argues, his fundamental conclusions were correct and will be proven so: “We made a number of mistakes, but they didn’t change the basic conclusions (even if they might have been retracted).”
It’s tempting to believe findings like the claim that smaller portions lead us to eat less. But the real lesson here seems to be that it’s exactly those sorts of findings toward which we most need to train ourselves to remain skeptical—to remember that science is about asking questions, not pursuing answers.
While Wansink is generally contrite and apologetic, it’s still not clear if he has internalized that ethos. “You can do research for other academics, or you can do research to solve problems,” he wrote to me. “Doing it for academics is more prestigious, but doing it to solve real problems in the real world is more gratifying—enriching, as I said. Having people say, ‘I do something differently because of your research, and it works’ takes away the sting of someone pointing out the degrees of freedom in an F-test were wrong.”
For now, though, the shadow cast by Wansink’s story has worsened a real problem: declining credibility in nutrition and behavioral science, and declining public trust in science generally. This is a very practical problem that needs attention. Perhaps from a person who wants a second act and is interested in finding solutions.