The U.N. is using a number that understates how many people have been killed in Syria. Does it have any better alternatives -- and would it even matter if it did?
The death count in Syria's ongoing civil war was revised upwards on Tuesday. Navi Pillay, the United Nations High Commissioner for Human Rights, now says that the toll is "probably now approaching 70,000," an increase of 10,000 from the end of November, when a U.N.-commissioned report found 60,000 individual instances in which a name, date and location of death could be determined. The data set from that report suggested that the true number of dead in the Syria conflict was even higher than that, and one of the report's authors told The Atlantic that the figure was "a very conservative under-count." Pillay's 70,000 number has some relationship to two unknown figures: the number of deaths that can be estimated given currently available information, and the actual number of deaths in the conflict, a total which might not be known for several years (if it is ever conclusively known at all). Both of these numbers are higher than 70,000. Perhaps they're even much higher.
Whether intended or not, Pillay's claim masks the actual gravity of the Syria conflict. The widely-cited 60,000 and 70,000 numbers bear some kind of statistical relationship to the true death count, though at present we have no idea what that relationship is. The numbers are a reflection of what is currently known about the conflict -- and not, in fact, a reflection of the realities of the conflict. Official and popular adherence to such an obviously deflated figure is troubling, given the enormity of the Syria conflict and the still-unfolding debate over how and whether the United States and the international community should intervene there. A misleading number is now woven into a debate of global importance: because Pillay and the news media are using the 60,000 or 70,000 figures without any meaningful qualification, the conflict's true humanitarian scope is being unintentionally yet insidiously distorted.
The day after Pillay made the revision, I spoke with Patrick Ball, the computer scientist who co-authored the January report establishing the 60,000 figure. "We don't think the number of documented deaths is very useful to understanding what's going on in Syria," Ball told me, after saying it was "plausible" that the death count was in fact approaching 70,000, as Pillay claimed (and after clarifying that his research team was not the source of Pillay's new number). "It's useful to understanding how documentation in Syria is working. But because we don't know how many deaths are undocumented...we've tried to discourage people from using our report as a way of understanding patterns over time and space." In other words, it would be inaccurate and even a bit irresponsible to determine the current death toll based on a linear extrapolation of the January report's findings. Determining the relationship between the 60,000 reported deaths and both the estimated and actual death tolls isn't as simple as tracking where the lines converge.
So what is the alternative? Are we stuck with a misleading sense of the true human toll of one of the most pressing crises on earth? And does it even matter if we are?
The number of people killed in a given conflict is generally determined in one of two ways: through "a census or some sort of population survey," or through something called "multiple systems estimation," according to Bethany Lacina, a professor at the University of Rochester and the co-author of a widely-cited dataset of conflict deaths. Under the former method, a pre-war population baseline is compared to a post-war survey of the conflict zone. Investigators can do a mortality study, in which they canvass a population for instances of war-related death, information that can then be supplemented with death counts from human rights observers, hospitals, and other sources for a final calculation. Researchers could also perform an excess death survey, which compares a population's expected, non-conflict death rate, or the reported pre-conflict death rate among a surveyed population, to the observed wartime death rate, a method which takes malnutrition and disease -- "nonviolent" causes of death that are nevertheless attributable to wartime conditions -- into account.
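The excess-death arithmetic described above can be sketched in a few lines. Every figure below is hypothetical, chosen only to illustrate the calculation, not drawn from any real survey.

```python
# A minimal sketch of an excess-death calculation: compare the deaths
# implied by a pre-conflict baseline death rate with those implied by
# the observed wartime death rate. All numbers are made up.

def excess_deaths(population, baseline_cdr, observed_cdr, years):
    """Excess deaths over a period, given crude death rates (CDR)
    expressed as deaths per 1,000 people per year."""
    expected = population * (baseline_cdr / 1000) * years
    observed = population * (observed_cdr / 1000) * years
    return observed - expected

# Hypothetical conflict zone: 5 million people, a pre-war CDR of 8 per
# 1,000 per year, a wartime CDR of 12, over two years of fighting.
print(excess_deaths(5_000_000, 8, 12, 2))  # → 40000.0
```

Note that this counts all deaths above the baseline, not just violent ones, which is why the method captures malnutrition and disease attributable to the war.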
It is basically impossible to perform a survey under wartime conditions. Not only is it not worth the risk for social scientists to, for instance, parachute into Aleppo and begin interviewing residents -- such a study would also produce a distorted view of the conflict. Anyone still in Aleppo has witnessed widespread death and destruction; some percentage of the population has already fled. The war could make the most-affected parts of the city inaccessible to researchers, and response bias could cut in both directions, with residents downplaying or exaggerating the atrocities of one or the other side in a still-hot conflict.
There's another, even more fundamental pitfall for mortality surveys and excess death studies: the less fastidious the pre-conflict documentation, the less useful the post-conflict survey results will be. The post-conflict population is a less useful indicator of conflict mortality if the pre-conflict population wasn't accurately counted. And the baseline is everything, especially for an excess death study: if there isn't enough information to reliably determine a population's pre-conflict life expectancy and public health profile, the range of plausible death estimates will be wider, and the survey results will be less exact. Excess death estimates for the war in the Democratic Republic of Congo range from 1 million to 5.4 million. There was virtually no reliable population or public health survey conducted in Darfur during the pre-war years -- the number of excess deaths during the conflict in western Sudan may never be truly known.
"Even if it was a true average figure how would you know whether the conflict that you're investigating is average?"
Of course, the 1 million and 5.4 million estimates both confirm an appalling amount of death and suffering. The inherent flaws behind conflict death surveys shouldn't prevent researchers from attempting to calculate the number of people killed in war, even when a conflict is still ongoing. There are still things that investigators can do "at the edges," Lacina says. For instance, researchers could interview refugees about conflict mortality, a method which could give investigators a rough picture of a war's severity without forcing them to wait for a conflict to end.
There are problems with this too. Andrew Mack, director of the Human Security Report Project and a former U.N. official, described how surveys performed at refugee camps during the Darfur crisis had the effect of distorting the reported death count. "If you take your surveys shortly after people arrived, they would report the mortality rates that were prevailing before they reached the refugee camps. You would expect them to be a reasonable indicator of death rates from some parts of Darfur," Mack said. "If you went back to those same camps a year and a half later you would find the mortality rates had dropped dramatically." Almost by definition, most refugees will have had a first-hand experience of the war, inflating the reported mortality rate at the beginning of a refugee survey. Then, after a few months, refugee camp services tend to bring the population's mortality rate back down toward pre-conflict levels.