Stimulus predictions: put up or shut up
Here's a liberating statement: while I am skeptical of the proposed stimulus bill, I don't know what net impact it would have on the economy.
Apparently a lot of other people are far more confident about their ability to predict this. Many are highly credentialed economists. Some believe the stimulus will help a lot; others believe it will cause more harm than good. So, however talented they all are at speaking in a grave and impressive tone, at least some of them must be wrong.
Megan McArdle has made the obvious request: have them publish predictions of the impact of whatever stimulus bill actually passes, so that we can later know who was right and who was wrong.
But the falsification problem is even worse than this. Suppose Ms. Famous Economist X predicts that "Unemployment will be about 10% on 1/1/10 without the bill, and about 8% with the bill." What do you think will happen when New Year's Day 2010 rolls around and unemployment is 9.8%? I think it's a very, very safe bet that Ms. X will say something like "Yes, but other conditions deteriorated faster than anticipated (who could have guessed that China would do a massive currency revaluation in the summer of 2009?), so if we hadn't passed the stimulus bill, unemployment would have been more like 12%. So you see, I was right after all; it reduced unemployment by about 2%." This is the problem with such non-experimental sciences: we have no way to measure the counterfactual.
Of course, those who believe that I just don't get it, and that these macroeconomists really have managed to account for the incredible complexity of our economy, must by definition believe that this problem has been solved: that econometric models exist which adjust for all relevant economic drivers well enough to permit tolerably accurate predictions.
So here's what we would need in order to falsify a prediction. Anyone who claims to know the impact should escrow, with a named third party, a copy of the source code of the econometric model used to make the prediction, along with a stated confidence interval, the operational scripts, and the assumptions for all required non-stimulus inputs that populate the model. When the date for which the prediction was made arrives, the third party should run the model with the actual data substituted for all the non-stimulus assumptions and compare the model's output to the actual outcome. Any difference would be model error. We still would not be able to partition the sources of error between "error in predicting the causal impact of the stimulus" and "other," but at least we would have a real measurement of model accuracy for this instance.
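To make the escrow concrete, here is a minimal sketch in Python of what the third party's job might look like. All of it is illustrative: the function names are mine, a real filing would contain the full model and its operational scripts rather than a toy payload, and the numbers are just Ms. X's hypothetical figures from above. The point is that a cryptographic hash filed in advance locks in the code, the assumptions, and the confidence interval, so that none of them can be quietly revised after the fact.

```python
import hashlib
import json

def commit_prediction(model_source, assumptions, point_estimate, confidence_interval):
    """File a tamper-evident commitment with the named third party.

    Any later change to the model code, the non-stimulus assumptions,
    the point estimate, or the confidence interval changes the hash.
    """
    payload = json.dumps({
        "model_source": model_source,          # full econometric model, verbatim
        "assumptions": assumptions,            # forecast non-stimulus inputs
        "point_estimate": point_estimate,
        "confidence_interval": confidence_interval,
    }, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def score_prediction(point_estimate, confidence_interval, actual):
    """On the target date: re-run the escrowed model with the actual
    non-stimulus data, then compare its output to the observed outcome."""
    low, high = confidence_interval
    return {
        "model_error": round(actual - point_estimate, 4),
        "within_interval": low <= actual <= high,
    }

# Ms. X's hypothetical prediction: 8% unemployment with the bill.
receipt = commit_prediction(
    model_source="<entire model source, escrowed verbatim>",
    assumptions={"china_revalues_currency": False},
    point_estimate=0.08,
    confidence_interval=(0.075, 0.085),
)

# A year later, with unemployment at 9.8%:
print(score_prediction(0.08, (0.075, 0.085), actual=0.098))
# {'model_error': 0.018, 'within_interval': False}
```

Note that score_prediction deliberately reports only the total model error; as argued above, partitioning that error between the stimulus term and everything else is exactly what we cannot do.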
Of course, I sincerely doubt this will happen. I wonder why not?