April 2008

Body Counting

Why even the most-dubious statistics influence our thinking

How many Iraqis have died because of the American invasion? It would be nice to know the local price of Saddam Hussein’s ouster, five years on. Many researchers have produced estimates. Unfortunately, these range from 81,020 to 1 million. The wide variance, of course, speaks to considerable uncertainty, although the individual figures are often absurdly precise.

The figure most often quoted, and until recently regarded by many as the most scientific, comes from a study published in The Lancet, a prominent British medical journal, just before the 2006 election. That study, which made headlines worldwide and was cited by war opponents from Ted Kennedy to Al-Jazeera, found that a shocking 601,027 Iraqis had died violent deaths since the U.S. invasion. But the timing of the study’s publication and the size of its estimate have attracted a great deal of criticism; its authors, mostly researchers at Johns Hopkins University, have been accused of everything from bias to outright fraud.

Research by the World Health Organization, published in January in The New England Journal of Medicine, has cast further doubt. It covered basically the same time period and used similar statistical techniques, but with a much larger sample and more-rigorous interview methods. It found that the Lancet study’s violent-death count was roughly four times too high. This has a familiar ring to it. A smaller study, released by the Johns Hopkins team in 2004, had been quickly contradicted by a larger UN survey suggesting that it had overstated excess mortality by, yes, about a factor of four.

“Conflict epidemiology,” the study of war’s health effects, is by its nature an inexact science. War and anarchy are not friends to careful, by-the-book research. We have little idea how many people now live in Iraq; ascertaining the number who have died there is a tall order. And huge disparities in death estimates are not unique to the conflict in Iraq; cluster sampling, the best-regarded survey technique for use in war-torn places, has produced estimates in other conflict zones, such as Darfur, that vary by factors of three or more.

All casualty studies have problems. But the Johns Hopkins study’s methodology was particularly troublesome. The number of neighborhoods the team sampled was just above the minimum needed for statistical significance, and the field interviewers rushed through their work. The interviewers were also given some discretion over which households they surveyed, a practice generally regarded as unwise. And though such latitude calls for closer-than-normal supervision of field interviewers, the Johns Hopkins team seems to have provided little. Any of these choices could be defended given the dangers the interviewers faced, and the authors have said as much, claiming, in essence, that this was the best they could do in a bad situation.

But that raises an unwelcome question: If this is the best we can do, should we be doing this at all? Cluster sampling was developed to measure vaccination coverage; it has never been validated for estimating mortality. Because of the wide variance in the estimates it produces, some researchers are now questioning its usefulness.
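To see why a survey built on a few dozen clusters produces such unstable numbers, consider a minimal simulation, written here in Python. Every figure in it is invented for illustration and comes from none of the studies discussed: when deaths are concentrated in a handful of violent neighborhoods, a survey that visits only a few dozen of them can easily land mostly on quiet ones, or mostly on bloody ones, and the resulting estimate swings accordingly.

    import random
    import statistics

    random.seed(0)

    NUM_NEIGHBORHOODS = 2000        # hypothetical number of neighborhoods nationwide
    HOUSEHOLDS_PER_CLUSTER = 40     # households interviewed in each sampled neighborhood

    # Violence is geographically concentrated, so give each neighborhood its own
    # (hypothetical) probability that a surveyed household reports a violent death.
    death_rate = [random.betavariate(0.5, 20) for _ in range(NUM_NEIGHBORHOODS)]
    true_rate = statistics.mean(death_rate)

    def survey(num_clusters):
        """Estimate the national death rate from one cluster-sampled survey."""
        deaths = households = 0
        for n in random.sample(range(NUM_NEIGHBORHOODS), num_clusters):
            for _ in range(HOUSEHOLDS_PER_CLUSTER):
                households += 1
                if random.random() < death_rate[n]:
                    deaths += 1
        return deaths / households

    # Repeat the survey many times with few clusters and with many,
    # and compare how widely the estimates scatter.
    for clusters in (30, 300):
        estimates = sorted(survey(clusters) for _ in range(500))
        lo, hi = estimates[12], estimates[487]   # roughly the middle 95 percent
        print("%3d clusters: true rate %.4f, 95%% of estimates between %.4f and %.4f"
              % (clusters, true_rate, lo, hi))

With a few hundred clusters the estimates settle near the true rate; with a few dozen they scatter widely, much as the real-world Iraq and Darfur figures do.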

Yet though its compromises made it particularly unreliable, the Lancet study remains the most widely known. Its conclusions were the earliest and most shocking of the scientific estimates and thus generated enormous media attention. The more-careful counts that followed prompted fewer, and less prominent, articles. There’s little doubt that the larger number will live on for years in the writings of antiwar activists. But the rest of us, too, were influenced by it, perhaps more than we realize. We will have to live with its legacy.

Most data create what cognitive scientists call “anchoring effects”: we fixate on numbers we’ve heard, even if they’re arbitrary or wrong. In one 1970s experiment, Amos Tversky and Daniel Kahneman (whose work won a Nobel Prize) famously picked a number at random in front of their subjects, by spinning a wheel, and then asked them to guess whether the percentage of African nations in the UN was higher or lower than that number. Next, they asked for a hard estimate of the actual percentage. The higher the random number, the higher the final estimate tended to be, even though the first number had been obviously irrelevant.

These effects persist, infecting our related views, even when the “facts” are subsequently discredited. In one study, for example, experimenters gave students false, negative information about a teacher, but then told them it was incorrect. Nonetheless, when subsequently asked to evaluate that teacher, the students generally turned in worse ratings than did students in a control group that had not heard the bogus information.

Megan McArdle is a columnist at Bloomberg View and a former senior editor at The Atlantic. Her new book is The Up Side of Down.
