It's common among programmers to say "Garbage In, Garbage Out": your result is only as good as your data. The same could be said of economic models, except that the problem is not so much the data as the incompleteness of the models. Arnold Kling recounts an interesting encounter:
At the unpleasant session yesterday, I did learn something interesting from Doyne Farmer of the Santa Fe Institute, while he was ranting against the state of the art in macroeconomic models. He said that in 2006, the Fed simulated a 20 percent decline in home prices in its model, and the effect was minor.
That sounds highly plausible, of course. But it just adds to my frustration about the infamous Blinder-Zandi black-box simulations purporting to show that the economy would have been much worse without TARP. Such an exercise assumes that we have precise quantitative knowledge of the feedback between real and financial variables. But the exercise that Farmer referred to illustrates just how weak an assumption that is.
It's fine to say "Our best guess is that TARP and the stimulus did some good." But it's well to remember that our best guess really isn't very good. And putting an exact number on it--"3.1 million jobs created or saved!"--creates a dangerous false precision, giving people the illusion that we have good knowledge in a very foggy area.
Indeed, it's worth reflecting on the fact that the simulation the Fed ran--and a million others run by regulators, bankers, and investors--probably made the bubble, and the resulting crash, much worse. People thought they knew something they didn't, and that knowledge made them complacent. I doubt the unanticipated results of the stimulus will be so devastating, but it's nonetheless important to guard against hubris.