The U.N. is using a number that understates the true number of people killed in Syria. Does it have any better alternatives -- and would it even matter if it did?
The death count in Syria's ongoing civil war was revised upwards on Tuesday. Navi Pillay, the United Nations High Commissioner for Human Rights, now says that the toll is "probably now approaching 70,000," an increase of 10,000 from the end of November, when a U.N.-commissioned report found 60,000 individual instances in which a name, date and location of death could be determined. The data set from that report suggested that the true number of dead in the Syria conflict was even higher than that, and one of the report's authors told The Atlantic that the figure was "a very conservative under-count." Pillay's 70,000 number has some relationship to two unknown figures: the number of deaths that can be estimated given currently available information, and the actual number of deaths in the conflict, a total which might not be known for several years (if it is ever conclusively known at all). Both of these numbers are higher than 70,000. Perhaps they're even much higher.
Whether intentionally or not, Pillay's claim masks the actual gravity of the Syria conflict. The widely-cited 60,000 and 70,000 numbers bear some kind of statistical relationship to the true death count, though at present we have no idea what that relationship is. The numbers are a reflection of what is currently known about the conflict -- and not, in fact, a reflection of the realities of the conflict. Official and popular adherence to such an obviously deflated figure is troubling, given the enormity of the Syria conflict and the still-unfolding debate over how and whether the United States and the international community should intervene there. A misleading number is now woven into a debate of global importance: because Pillay and the news media are using the 60,000 or 70,000 figure without any meaningful qualification, the conflict's true humanitarian scope is being unintentionally yet insidiously distorted.
The day after Pillay made the revision, I spoke with Patrick Ball, the computer scientist who co-authored the January report establishing the 60,000 figure. "We don't think the number of documented deaths is very useful to understanding what's going on in Syria," Ball told me, after saying it was "plausible" that the death count was in fact approaching 70,000, as Pillay claimed (and after clarifying that his research team was not the source of Pillay's new number). "It's useful to understanding how documentation in Syria is working. But because we don't know how many deaths are undocumented...we've tried to discourage people from using our report as a way of understanding patterns over time and space." In other words, it would be inaccurate and even a bit irresponsible to determine the current death toll based on a linear extrapolation of the January report's findings. Determining the relationship between the 60,000 reported deaths and both the estimated and actual death toll isn't as simple as tracking where the lines converge.
So what is the alternative? Are we stuck with a misleading sense of the true human toll of one of the most pressing crises on earth? And does it even matter if we are?
The number of people killed in a given conflict is generally determined in one of two ways: through "a census or some sort of population survey," or through something called "multiple systems estimation," according to Bethany Lacina, a professor at the University of Rochester and the co-author of a widely-cited dataset of conflict deaths. Under the former method, a pre-war population baseline is compared to a post-war survey of the conflict zone. Investigators can do a mortality study, in which they canvass a population for instances of war-related death, information that can then be supplemented with death counts from human rights observers, hospitals, and other sources for a final calculation. Researchers could also perform an excess death survey, which compares a population's expected, non-conflict death rate, or the reported pre-conflict death rate among a surveyed population, to the observed wartime death rate, a method which takes malnutrition and disease -- "nonviolent" causes of death that are nevertheless attributable to wartime conditions -- into account.
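The arithmetic behind an excess death survey is simple even though the inputs are hard to get. A minimal sketch, using invented numbers rather than data from any real survey, and assuming mortality is expressed in the common humanitarian convention of deaths per 1,000 people per month:

```python
def excess_deaths(baseline_cmr, observed_cmr, population, months):
    """Estimate excess deaths from crude mortality rates (CMR).

    CMRs are deaths per 1,000 people per month. The result attributes
    to the conflict every death above the pre-war baseline, including
    "nonviolent" deaths from disease and malnutrition.
    """
    excess_rate = observed_cmr - baseline_cmr          # deaths/1,000/month
    return excess_rate * (population / 1000) * months  # total excess deaths

# Hypothetical illustration: a population of 2 million, a pre-war
# baseline of 0.5 deaths per 1,000 per month, an observed wartime
# rate of 1.5, over 12 months of conflict.
print(excess_deaths(0.5, 1.5, 2_000_000, 12))  # → 24000.0
```

The fragility described below follows directly from this formula: everything hinges on the baseline, and a poorly documented pre-war mortality rate makes the whole estimate swing wildly.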
It is basically impossible to perform a survey under wartime conditions. Not only is it not worth the risk for social scientists to, for instance, parachute into Aleppo and begin interviewing residents -- such a study would also produce a distorted view of the conflict. Anyone still in Aleppo has witnessed widespread death and destruction; some percentage of the population has already fled. The war could make the most-affected parts of the city inaccessible to researchers, and response bias could cut in both directions, with residents downplaying or exaggerating the atrocities of one or the other side in a still-hot conflict.
There's another, even more fundamental pitfall for mortality surveys or excess death studies: the less fastidious the pre-conflict documentation, the less useful the post-conflict survey results will be. The post-conflict population is less useful an indicator of conflict mortality if the pre-conflict population wasn't accurately counted. And the baseline is everything, especially for an excess death study: if there isn't enough information to reliably determine a population's pre-conflict life expectancy and public health profile, the observed range of deaths will be higher, and the survey results will be less exact. Excess death estimates for the war in the Democratic Republic of Congo range from 1 million to 5.4 million. There was virtually no reliable population or public health survey conducted in Darfur during the pre-war years -- the number of excess deaths during the conflict in western Sudan may never be truly known.
Of course, the 1 million and 5.4 million estimates both confirm an appalling amount of death and suffering. The inherent flaws behind conflict death surveys shouldn't prevent researchers from attempting to calculate the number of people killed in war, even when a conflict is still ongoing. There are still things that investigators can do "at the edges," Lacina says. For instance, researchers could interview refugees about conflict mortality, a method which could give investigators a rough picture of a war's severity without forcing them to wait for a conflict to end.
There are problems with this too. Andrew Mack, director of the Human Security Report Project and a former U.N. official, described how surveys performed at refugee camps during the Darfur crisis had the effect of distorting the reported death count. "If you take your surveys shortly after people arrived, they would report the mortality rates that were prevailing before they reached the refugee camps. You would expect them to be a reasonable indicator of death rates from some parts of Darfur," Mack said. "If you went back to those same camps a year and a half later you would find the mortality rates had dropped dramatically." Almost by definition, most refugees will have had a first-hand experience of the war, inflating the reported mortality rate at the beginning of a refugee survey. Then, after a few months, refugee camp services have a tendency to reduce the mortality rate to pre-conflict levels.
"In something like 4-6 months, mortality rates in a reasonably well-provided for refugee camp will have come down to the rate prevailing during the prewar period, or actually lower than that," Mack said. NGOs frequently perform refugee surveys in order to determine the needs of the populations they're serving. But they're less useful for counting a conflict's dead.
As Lacina explains, the "fog" that pervades the entire enterprise of establishing a death toll has no quantitative or numerical value -- there's no easy way to translate an observed number into an estimate, never mind a definitive, actual number. "The other thing researchers might want is some sort of sense of how much these numbers tend to change between the sort of fog of war and the revision that comes later when people are found and what happened becomes clearer. And that's a subject of intense disagreement."
Similarly, there is no set multiplier between observed deaths and actual deaths. "There's not even good data on the relationship between direct deaths on the one hand and indirect deaths from disease and malnutrition on the other," says Mack. "The multiplier goes between anything from 2 to 70 or 80," depending on the conflict. Mack pointed out that even if there were a standard multiplier, it wouldn't be terribly useful. "Even if it was a true average figure, how would you know whether the conflict that you're investigating is average or not?"
Luckily, there's "multiple systems estimation," which, in a rather macabre irony, is related to methods used to track wildlife population sizes. As Lacina explains, researchers tag animals one year, recapture them over subsequent years, and use the observed probability of recovering an animal in a given year to estimate a population size. Researchers essentially calculate discrepancies within their own methodology in order to reach a more accurate sense of the population they are dealing with. In the case of conflict death calculations, researchers can look at the frequency with which names appear on individual human rights monitors' lists to determine the probability of appearing on no list, with the aim of calculating a "population size" of the dead. Intuitively, a high frequency of names appearing on only one or two lists -- rather than four or five -- suggests a high probability of not being counted. In contrast, if all names appeared on all lists, it would suggest near-flawless documentation.
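The simplest version of this list-overlap logic is the two-list capture-recapture formula from wildlife studies, sometimes called the Lincoln-Petersen estimator: treat one list as the "tagged" set, a second list as the "recapture," and use the overlap to estimate the total. A toy sketch with invented names -- real multiple systems estimation, including the kind Ball describes below, fits models across three or more lists rather than applying this naive formula:

```python
def lincoln_petersen(list_a, list_b):
    """Two-list capture-recapture estimate of a total population.

    N_hat = (|A| * |B|) / |A ∩ B|. A small overlap between two large
    lists implies many uncounted individuals; total overlap implies
    near-complete documentation.
    """
    a, b = set(list_a), set(list_b)
    overlap = len(a & b)
    if overlap == 0:
        raise ValueError("lists share no names; estimate is undefined")
    return (len(a) * len(b)) / overlap

# Hypothetical lists from two independent monitors: 4 and 3 names,
# with 2 names in common, yields an estimated total of 6 deaths --
# one of them appearing on neither list.
monitor_1 = ["Amin", "Basma", "Carim", "Dalia"]
monitor_2 = ["Basma", "Carim", "Elias"]
print(lincoln_petersen(monitor_1, monitor_2))  # → 6.0
```

The formula assumes the two lists are independent samples of the same population, which is exactly the assumption that breaks down for human rights monitors' lists.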
It isn't quite as simple as that, as Patrick Ball explained. What works in a nature preserve doesn't necessarily work in a conflict zone. Take the lists, for instance: calculating a simple probability ignores the way in which the different data sets -- in this case, lists of people killed in the conflict, provided by various human rights monitors -- relate to one another. "The logic is we want to get the best prediction of how this interaction process works between these systems," says Ball.
Ball's task is to determine a population size based on detailed but nevertheless incomplete information. "What a statistician wants to do is say ok, let's divide the world into little slices of time and space, so we can make estimates over time and space," says Ball. Researchers then test various mathematical assumptions within these individual "strata" -- a stratum being, for example, Aleppo in March of 2012. This requires math so complicated that Ball resorted to metaphor in order to explain it.
"Imagine that you have two dark rooms, and you want to know how big the rooms are," he said. "You can't go into the rooms and measure. They're dark, and you can't see inside them, but what you do have is a bunch of little rubber balls." Throw the balls in one room, and you hear frequent hollow pinging sounds as they bump into each other. Throw them into the next room, and the sound is less frequent. Even though you can't see inside of either room, you can intuit that the latter room is the larger of the two.
The dark rooms are Aleppo in March of 2012. The rubber balls are the various data sets. The act of throwing the balls into the dark rooms is akin to the complex mathematical analysis that allows Ball to "see how frequently the data set encounters itself," and the aggregate results from every "strata" will be an estimate of the number of people killed in Syria's civil war. He says his team will have this estimate in another two to three months.
For civilians in Syria, it's unclear how any of this matters. It's unclear how it even matters to global decision makers with the potential ability to hasten the end of Syria's civil war.
For people living under constant threat of artillery fire, the question of whether 70 or 100 or even 500 thousand of their fellow countrymen have already died must seem like an abstract matter, a number wholly divorced from the urgency of the current situation. As for the decision-makers -- the uptick in the death count hasn't been enough to convince the Russians to stop arming the Assad regime, or the NATO states to enforce a no-fly zone, or create civilian safe areas along the Turkish and Jordanian borders. The war has convinced some of the Gulf States to aid certain rebel groups, and it's convinced an Al Qaeda in Iraq affiliate to take a leading role in the resistance to the regime of Bashar al Assad. But these are actions disconnected from the actual death toll -- strategic calculations, rather than purely humanitarian decisions.
Figuring out the death toll during a hot conflict has practical utility. "You actually need to have this data if you're going to have serious needs assessments for humanitarian purposes," says Mack. In terms of political and moral impact, the aforementioned difference between 70 or 100 or 500 thousand deaths is disturbingly hard to identify. The question might itself be distracting.
Samuel Moyn, a Columbia University professor and author of an intellectual history of human rights, says that death counts can have the effect of deflecting attention from the cultural and political factors that help shape society's response to atrocity. "We could think, why is this case, whether it's 60 or 70 thousand, leading us to the brink of this debate about intervention, when this really wasn't something that concerned us in other cases?" Moyn says there's a need "to make sure we're not getting misled by our outrage and our attempt to quantify," and to "think hard about what's really driving our response."
These underlying concerns could eventually convince U.S. policymakers to intervene in Syria. Or they could convince them to sit the conflict out. Moyn doesn't mean that the death count is some abstract or irrelevant issue -- just that the degree of public attention towards it feeds into and hints at other, more fundamental questions, some of which could have a direct bearing on the U.S.'s actions in Syria. "I would prefer a more open discussion that cares about individual and mass human death, but in a context that says we've already been involved [in the Middle East]," says Moyn. "Ultimately the people who are still alive and what kind of regime they get in the long run is what matters."
The people who are already dead matter too, for reasons that transcend exigency or politics; reasons that are grounded in morality, but are hardly abstract. They matter because every person matters -- and, by extension, every death matters.
Paradoxically, it is hard to explain how or why this is -- to articulate the moral urgency of counting the dead -- without retreating into the abstractions of religion, poetry, or philosophy. But Moyn pointed me to a wrenching passage in Timothy Snyder's Bloodlands, the Yale professor's acclaimed history of Hitler and Stalin's atrocities in central Europe in the 1930s and 40s, that at least tries:
Cultures of memory are organized around round numbers, intervals of ten, but somehow the remembrance of the dead is easier when the numbers are not round, when the final digit is not a zero. So within the Holocaust, it is perhaps easier to think of 780,863 different people at Treblinka: where the three at the end might be Tamara and Itta Willenberg, whose clothes hung together after they were gassed, and Ruth Dorfmann....
Within the history of mass killings in the bloodlands, recollection must include the one million (times one) Leningraders who starved during the siege, the 3.1 million (times one) distinct prisoners of war killed by the Germans in 1941-1944, or the 3.3 million (times one) distinct Ukrainian peasants starved by the Soviet regime in 1932-1933. These numbers will never be known to precision, but they include individuals, too: peasant families making fearful choices, prisoners keeping each other warm in dugouts, children such as Tania Savicheva watching their families perish at Leningrad....
The Nazi and Soviet regimes turned people into numbers, some of which we can only estimate, some of which we can reconstruct with fair precision. It is for us as scholars to seek these numbers and put them in perspective. It is for us as humanists to turn the numbers back into people.
The people counting the dead in Syria are human rights observers, scientists and mathematicians -- not poets and humanists. The numbers they come up with will not be perfect. But the first step in "turning the numbers back into people" is having a number in the first place.