Once upon a time, before the age of the Internet, we lived in a world of "many economists." If a newspaper reporter was writing a story on inflation, for instance, he or she would call up a number of experts and relay their comments -- not just via quotes -- in an article: "Many economists think inflation is likely to rise in the near future." In other words, we lived in a world where reporters and their editors were the only link between sources and readers. And what were we, the readers, to make of what we read? For the non-expert at least, we could choose to trust it or not, and that was about it.
Fast forward to the present. Those articles still get written, of course, but the dynamic between reader, author and source has drastically changed. If I want to know what economists think about an issue, I can go straight to their blogs. The benefits of this wealth of expert information are so obvious that I won't dwell on them here. But the arrangement presents its own challenges.
By now you may have heard that there is a lot of false or misleading information on the Internet. So your first challenge as a reader is knowing where to look. That may seem easy enough, but it's complicated by a second issue: We have a stubborn tendency to seek out and accept information that conforms to our existing worldview.
A number of articles and books have already been written about the dangers of the Internet in these regards. But while it has subverted many of the traditions of journalism, the Internet has arguably led to the rediscovery of one of its most crucial processes: fact-checking. In the midst of an explosion of opinionated writing, a new class of online arbiter has emerged, led by sites like PolitiFact.com and FactCheck.org.
These sites have become invaluable sources of information about controversial political matters. Perhaps more interestingly, the methods they rely on to sort fact from fiction may serve as a guide to our own efforts to determine reliability as we filter information online.
With that in mind, I spoke with FactCheck.org Director Brooks Jackson over e-mail about the process he and his team use in their work.
While some of the claims FactCheck.org examines are relatively simple, you and your team don't shy away from complex issues. For instance, you did multiple analyses of President Obama's stimulus package. Taking on some of the claims meant wading into some heavy-duty economics. How did you decide to take that on? Do you have economists on staff at FactCheck.org?
We have no economist, or budget to hire one. I've covered economic subjects off and on in Washington for a long time, though. As for deciding to take on a subject: when the public is subjected to conflicting claims, as was the case here, we seek out the best information we can find to pry out the facts. The stimulus was the biggest issue of the day for a long time. We try to take on whatever people are wondering about.
In February 2009, the FactCheck Wire ran a post outlining disagreements among economists on the stimulus and deferring judgment on whether it would work. Your September 2010 analysis is more confident in its assessment of the bill's jobs impact. How did your team's approach to analyzing the stimulus evolve over that time?
As we noted in February, economists had very little data at that time. Six months later, more data were available. Also, the CBO had weighed in. We give great weight to CBO analysis because it is scrupulously nonpartisan and has great expertise (I'm told there are 150 Ph.D.s on staff, and many economists). It's important to note that we make no attempt at independent economic analysis -- that's beyond our resources and expertise. What we can do is seek out the best sources of neutral analysis and lay them out for our readers in understandable terms.
Appeals to authority are tricky enough, but part of the debate over the stimulus also required weighing different models of the economy as well as different "schools" of macroeconomic thought. What kind of challenges did these factors present?
We don't try to pick which estimate is right -- we just note the range of credible expert opinion. About all we can say is that there's no doubt the stimulus spending created jobs (contrary to some silly utterances by some Republicans), but we can't be sure how many, and it's a matter of opinion whether they were worth the expense.
In your analysis of the stimulus, you cite the Congressional Budget Office. Ezra Klein has written about how the healthcare reform battle may have damaged the CBO's reputation, and has accused Republicans of trying to discredit "the last truly neutral, truly respected scorekeeper in Washington." Is that threat real?
CBO has certainly been attacked by some Republicans who did not like its findings. Does that mean its reputation is damaged? I'm not sure I agree. So far as I can see it still labors as honestly and skillfully as ever to give Congress an accurate picture of the likely budget consequences of proposed legislation.
In your book unSpun and on your site you list as a crucial test of evidence: "Is the source highly regarded and widely accepted?" What impact does today's level of partisanship have on that criterion?
Still a good criterion, even if it becomes harder to find a "widely accepted" source when partisans habitually reject sources that tell them what they don't want to hear. But it isn't the only criterion. With regard to CBO, it still has deep expertise, a record of unbiased nonpartisanship, and a transparent and scholarly methodology going for it.
Do your researchers rely on any sort of hierarchy of sources? Wikipedia's sourcing guidelines, for instance, treat academic and peer-reviewed sources as the most reliable. Is there any good rule of thumb about the reliability of academic work vs. think tanks vs. government vs. private researchers, etc.?
We would also give academic and peer-reviewed articles much greater weight than, say, a Wikipedia article. I don't understand their rule giving primary reliance on "secondary sources," however. We try to get as close as possible to primary sources -- such as a transcript of a news conference or a fresh download of unemployment figures from the BLS. I can't give you a strict hierarchy, however. Even peer-reviewed and scholarly articles can be wrong, as was recently shown to be the case with that horrible study claiming that vaccines cause autism. (How many young lives have been ended or blighted by that evil fraud, which scared many parents away from immunizing their children against some pretty awful diseases?)
Generally, we try never to rely on a single source. And we attribute, so our readers know where our information is coming from.
FactCheck.org has a lengthy list of think tanks along with their leanings and reliability. How did you come up with these descriptions?
We rely heavily on their own descriptions of themselves, the leanings of their leaders and funders, and our own long experience with dealing with many of these groups.
Some think tanks lean left or right by virtue of the policy implications of their research or the views of their staff (like Brookings), while others have stated ideological missions (like Heritage) that place them on one side or the other. Does that distinction matter to their reliability?
I suppose we take these things on a case-by-case basis. There's a wide range of opinion and focus within Brookings, for example. One author might lay out facts in a straightforward manner while another might be making a one-sided argument for a particular policy outcome. In all cases we try to look behind the arguments and validate the cited facts for ourselves. The Brookings Iraq Index was (and is) a great collection of facts about Iraq, for example. We've found fault with one or two Heritage studies, but found useful facts in others.
When is the government a good source of information? When isn't it? How do you approach using government as a source of information when the larger questions at hand often concern the virtue and competence of government itself?
Agencies like BLS, CBO, the Energy Information Administration, the Internal Revenue Service and some others I could name crank out numbers that are seldom questioned and based on nonpartisan, transparent methods. I don't know of many who question their competence.
Other parts of government are not above spinning the facts for their own ends. I would not call this government ad about Medicare particularly accurate or virtuous, for example: http://www.factcheck.org/2010/07/mayberry-misleads-on-medicare/.
We give great weight to the statistical agencies. Not so much to the political appointees. There's a big distinction there.
We tend to be more skeptical of assertions that run counter to our existing worldview. How can we adjust for this bias of "motivated skepticism"? In such situations, it seems our reasoning capabilities are coming to the service of our emotions, to ill effect. Is it ever the case that we ought to employ less critical thinking?
In unSpun, Kathleen Jamieson and I argue that to keep from being fooled by this common human tendency, it's a good idea to keep asking yourself: "Am I missing something? Does the other guy have a point here?" It also helps to be aware of this universal psychological tendency, and for teachers to point out examples of it.
Kathleen doesn't like the term "critical thinking" because it implies to some that they should automatically be critical. We prefer "analytical thinking." If you look at it that way, I think there's no danger of being too analytical. I agree that there is a danger of automatically distrusting anything said by people in authority. In that sense, yes, there is a danger of too much "critical" thinking. It's one thing to be skeptical, which is good. It's another to be cynical, which is a sort of naive belief that everybody is lying.
There seems to be a heavy anti-expert thread running through today's American political culture. Do you think that's a feature of the political moment, or an enduring fact of American politics?
Good question, and I don't know the answer. Distrust of authority and disdain for intellectuals is nothing new. Has it reached new heights (or depths)? Will it continue at this level? I don't know.