In trying to unravel the real story behind Climategate - not the hysterical conspiracy theories but skeptics' valid concerns about corrupted data and the bias of groupthink - John Tierney is, as always, worth reading. But more and more readers and bloggers are fingering, so to speak, the computer code that turns the raw data into the CRU results. That code itself seems deeply problematic, which is not so much a measure of wilful malfeasance as of the difficulty of translating vast amounts of uneven and varied historical data into readable, consistent form by scientists already looking for a pattern. In that process, the temptation to massage what is unmassageable may be real - and human (as scientists, for all their pretensions, are). A reader writes:
The really interesting story in Climategate is in the computer code and related files from CRU. See HARRY_READ_ME.txt online at www.di2.nu/foia. What the code and the Read_me file tell us:
(1) CRU's data files of weather-related information gathered at worldwide sites were (are?) a hopeless mess.
(2) The programmer applied arbitrary adjustments to the data (he says so himself) to get the desired results.
Perhaps this is an isolated case of incompetence, and other researchers have good data and programs. Perhaps the research done at CRU was somehow sound, despite appearances. Or perhaps at least some of these scientists are fooling themselves.
Richard Feynman, the brilliant physicist, in his commencement address at Caltech related the story of a famous scientist whose published result turned out to be a little bit off.
As he says,
"It's interesting to look at the history of measurements of the charge of the electron, after Millikan. If you plot them as a function of time, you find that one is a little bigger than Millikan's, and the next one's a little bit bigger than that, and the next one's a little bit bigger than that, until finally they settle down to a number which is higher. Why didn't they discover that the new number was higher right away? It's a thing scientists are ashamed of--this history--because it's apparent that people did things like this: When they got a number that was too high above Millikan's they thought something must be wrong--and they would look for and find a reason why something might be wrong. When they got a number closer to Millikan's value they didn't look so hard. And so they eliminated the numbers that were too far off, and did other things like that."
Feynman knew that learning not to fool yourself was one of the hardest parts of becoming a scientist.
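The selection effect Feynman describes can be sketched in a few lines of Python. This is a toy simulation, not anything from the actual measurement history: the "true" value, the anchor, and the tolerance are all invented for illustration. Measurements far from the prevailing consensus get discarded as "probably wrong," so the running consensus only creeps toward the truth, dragged back by the earlier anchor.

```python
import random

random.seed(0)

TRUE_VALUE = 100.0   # the quantity being measured (invented for illustration)
ANCHOR = 95.0        # an earlier, slightly-low published value
NOISE = 3.0          # scatter of individual measurements
TOLERANCE = 4.0      # results farther than this from the consensus get "re-examined"

def run_lab(n=200):
    """Average only the measurements that land near the current consensus."""
    consensus = ANCHOR
    accepted = []
    for _ in range(n):
        m = random.gauss(TRUE_VALUE, NOISE)
        # The bias: a result far from the consensus is assumed to be an
        # error and thrown out; one near the consensus is kept unquestioned.
        if abs(m - consensus) <= TOLERANCE:
            accepted.append(m)
            consensus = sum(accepted) / len(accepted)
    return consensus

anchored = run_lab()
unbiased = sum(random.gauss(TRUE_VALUE, NOISE) for _ in range(200)) / 200
print(f"anchored consensus: {anchored:.2f}")   # typically sits below the truth, near the anchor
print(f"unfiltered mean:    {unbiased:.2f}")   # close to the truth
```

No one in this toy lab fakes a number; each discard is individually defensible. The bias lives entirely in which results get the extra scrutiny, which is exactly Feynman's point about the post-Millikan measurements.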
Scientists are as prone as anyone to taking part in "informational cascades," particularly when they are being funded by granting agencies that reward those who continue in an established line of inquiry (can you imagine funding going to a scientist who found a climate counter-trend?) and when they are trying to publish in peer-reviewed journals whose editorial staff refuse to consider papers that do not come up with the expected results.
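An "informational cascade" has a standard textbook form (due to Bikhchandani, Hirshleifer, and Welch): each actor receives a noisy private signal but also sees what everyone before them did, and once the visible record leans far enough one way, it becomes rational to ignore one's own signal and follow the crowd. A minimal sketch, with the signal accuracy and herding threshold invented for illustration:

```python
import random

random.seed(1)

P_CORRECT = 0.7  # chance a private signal matches the true state (invented)

def run_sequence(n_agents=30, true_state=1):
    """Each agent sees all earlier actions plus one noisy private signal."""
    actions = []
    for _ in range(n_agents):
        signal = true_state if random.random() < P_CORRECT else 1 - true_state
        lead = actions.count(1) - actions.count(0)
        if lead >= 2:               # record leans "yes": follow the crowd,
            actions.append(1)       # ignoring the private signal
        elif lead <= -2:            # record leans "no": likewise
            actions.append(0)
        else:
            actions.append(signal)  # record is ambiguous: trust the signal
    return actions

actions = run_sequence()
print("actions:", actions)
```

Once the lead reaches two, every later agent acts identically regardless of their own evidence - and if the first couple of signals happened to point the wrong way, the whole sequence can lock onto the wrong answer. That lock-in, not any individual dishonesty, is the worry when funding and publication reward the established line.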