Bill Easterly has a good post on bad infant mortality stats:
Of the 193 countries covered in the study, the researchers were able to use actual, reported data for only 33. To produce the estimates for the other 160 countries, and to project the figures backwards to 1995, the researchers created a sophisticated statistical model. 
What's wrong with a model? Well, 1) the credibility of the numbers that emerge from these models must depend on the quality of "real" (that is, actually measured or reported) data, as well as how well these data can be extrapolated to the "modeled" setting (e.g., it would be bad if the real data come primarily from rich countries and are "modeled" for the vastly different poor countries - oops, wait, that's exactly the situation in this and most other "modeling" exercises), and 2) the number of people who actually understand these statistical techniques well enough to judge whether a certain model has produced a good estimate or a bunch of garbage is very, very small.
Without enough usable data on stillbirths, the researchers look for indicators with a close logical and causal relationship with stillbirths. In this case they chose neonatal mortality as the main predictive indicator. Uh oh. The numbers for neonatal mortality are also based on a model (where the main predictor is mortality of children under the age of 5) rather than actual data.
So the stillbirth estimates are numbers based on a model . . . which is in turn . . . based on a model.
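To see why model-on-model stacking is worrisome, here is a toy Monte Carlo sketch. Everything in it is hypothetical (the rates, the 10% error sizes, the multiplicative noise) and it is in no way the researchers' actual model; it just illustrates how each extra modeling link adds its own error on top of the previous one's.

```python
import random

random.seed(0)

def noisy(value, rel_error):
    """Perturb a value with multiplicative Gaussian noise of a given relative size."""
    return value * (1 + random.gauss(0, rel_error))

N = 10_000
true_rate = 10.0  # made-up "true" stillbirth rate per 1,000 births

direct, chained = [], []
for _ in range(N):
    # One-stage estimate: modeled directly from measured data (10% error)
    direct.append(noisy(true_rate, 0.10))
    # Two-stage chain: under-5 mortality -> neonatal mortality -> stillbirths,
    # with each link contributing its own 10% modeling error
    neonatal = noisy(true_rate, 0.10)
    chained.append(noisy(neonatal, 0.10))

def spread(xs):
    """Standard deviation, i.e., how far the estimates scatter around their mean."""
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

print(f"one-stage spread: {spread(direct):.2f}")
print(f"two-stage spread: {spread(chained):.2f}")
```

The two-stage spread comes out noticeably larger, and that is with both links assumed unbiased; if the real-data countries differ systematically from the modeled ones, the chain compounds bias too, not just variance.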
In many parts of the world, data is hard to come by. Unfortunately, voters and donors demand data . . . and when it can't be collected effectively, researchers under heavy pressure to come up with numbers are forced to use alternative methods.