How Is a Pundit Like a Mutual Fund Manager?

I know that Paul Krugman was not really serious when he linked this study naming him the most accurate prognosticator in America.  Nonetheless, it's getting some play around the internet, and a warm reception from people who don't seem to know any better, so it's worth pointing out why this sort of thing is so dreadful.  I mean, I'm sure it was a very fine senior project for the Hamilton College students who produced it, but the results tell us nothing at all about the state of prognostication in this country.

Krugman quotes this segment from the Hamilton College press release:

Now, a class at Hamilton College led by public policy professor P. Gary Wyckoff has analyzed the predictions of 26 prognosticators between September 2007 and December 2008. Their findings? Anyone can make as accurate a prediction as most of them just by flipping a coin.


The students found that only nine of the prognosticators they studied could predict more accurately than a coin flip. Two were significantly less accurate, and the remaining 14 were not statistically any better or worse than a coin flip.

The top prognosticators - led by New York Times columnist Paul Krugman - scored above five points and were labeled "Good," while those scoring between zero and five were "Bad." Anyone scoring less than zero (which was possible because prognosticators lost points for inaccurate predictions) were put into "The Ugly" category. Syndicated columnist Cal Thomas came up short and scored the lowest of the 26.

I myself read Paul Krugman more often than Cal Thomas, so perhaps I should take this as evidence of my perspicacity . . . but no.  This is nonsense.  The study runs for a little over a year, between September 2007 and December 2008.  They didn't even look at all of the statements made by the prognosticators, but at a "representative sample", presumably because they couldn't handle the volume that would be required to analyze all of it.  Some of the prognosticators made too few testable predictions to generate good results, and the riskiness of the predictions varied--someone who predicted that Obama was going to win the election in October 2008 seems to have gotten the same "score" for that flip as someone who predicted that Obama would do so in September 2007.  The number of predictions varied between commentators, making comparison even more difficult.
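
The small-sample complaint can be made precise.  With only a handful of testable predictions, even a genuinely skilled forecaster can't be statistically distinguished from a coin-flipper.  A quick sketch (the sample sizes below are made up for illustration, not taken from the study) shows how many correct calls out of n it takes to reject "no better than chance" at the 5% level:

```python
from math import comb

def p_at_least(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): the chance a pure coin-flipper
    gets k or more predictions right out of n."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# How many hits out of n does a pundit need before we can reject
# "no better than a coin flip" at the 5% level?
for n in (10, 20, 40):
    threshold = next(k for k in range(n + 1) if p_at_least(k, n) <= 0.05)
    print(f"n={n:3d}: need {threshold} correct ({threshold / n:.0%})")
# → n= 10: need 9 correct (90%)
# → n= 20: need 15 correct (75%)
# → n= 40: need 26 correct (65%)
```

A pundit graded on ten predictions has to go 9-for-10 before "better than a coin flip" means anything statistically--which is why small and unequal prediction counts make these rankings so shaky.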

Against this background, it makes no sense to say--as the students and the press release do--that this study shows that "a number of individuals in our sample, including Paul Krugman, Maureen Dowd, Ed Rendell, Chuck Schumer, Nancy Pelosi, and Kathleen Parker were better than a coin flip (sometimes, substantially so.)"  One of the commonest fallacies you see among beginning students of probability is the belief that if a coin has a 50% chance of turning up heads, then anyone who flips a coin multiple times should end up getting half heads, and half tails.

This is not true--especially when you have a small number of "flips", as most of the prognosticators did.  (It's not surprising that George Will, who made the greatest number of predictions, was statistically very close to zero.) Rather, if you get a bunch of people to flip coins a bunch of times, you'll get a distribution.  Most of the results will cluster close to 50/50 (as was true in this case), but you'll get outliers.
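
You can see the point by simulating it.  In the sketch below (the 26 "pundits" and 20 calls apiece are assumptions chosen to echo the study's scale, not its actual data), every forecaster is a pure coin-flipper--and yet the field still sorts itself into apparent winners and losers:

```python
import random

random.seed(0)  # fixed seed so the example is deterministic

PUNDITS = 26   # same headcount as the Hamilton study
CALLS = 20     # hypothetical number of binary predictions each

def coin_flipper():
    """Fraction of calls a pure guesser gets right."""
    return sum(random.random() < 0.5 for _ in range(CALLS)) / CALLS

scores = sorted(coin_flipper() for _ in range(PUNDITS))
print(f"worst {scores[0]:.0%}, median {scores[PUNDITS // 2]:.0%}, best {scores[-1]:.0%}")
```

Run it with different seeds and the "best pundit" routinely lands around 70% or better--pure luck that, in the study's scheme, would have earned a "Good" label.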

This is often pointed out in the case of mutual fund managers, as John Bogle does using this graph:

[Graph omitted in this version.]

And indeed, my finance profs taught me that the top mutual funds in a given year are not any more likely to show up as next year's top funds.  In fact, they may be less likely to do well the next year.  Why?  Because funds have strategies, which do better or worse depending on market conditions.  The funds that do well in a given year are probably the funds that were especially well positioned to show outsized fluctuations in response to whatever changed that year--but that also means that they're especially likely to lose money when those conditions change.  Because the fluctuations are a random walk, they do not vindicate the fund manager's strategy or perspicacity--but they may seem to, temporarily.
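
The no-persistence story is easy to demonstrate with pure noise.  In the sketch below (500 hypothetical funds, returns drawn at random, no skill anywhere by construction), about one in ten of the year-one top-decile funds lands in the top decile again in year two--which is just the base rate chance would predict:

```python
import random

random.seed(1)

FUNDS = 500
# Every fund's yearly return is pure noise (7% mean, 15% volatility are
# made-up parameters), so skill plays no role by construction.
returns = [[random.gauss(0.07, 0.15) for _ in range(FUNDS)] for _ in range(2)]

def ranks(year):
    """Map each fund to its performance rank that year (0 = best)."""
    order = sorted(range(FUNDS), key=lambda f: -returns[year][f])
    return {f: r for r, f in enumerate(order)}

y1, y2 = ranks(0), ranks(1)
top_decile = [f for f in range(FUNDS) if y1[f] < FUNDS // 10]
repeats = sum(1 for f in top_decile if y2[f] < FUNDS // 10)
print(f"{repeats} of {len(top_decile)} year-1 top-decile funds repeat in year 2")
```

Last year's "stars" scatter right back across the distribution, exactly as they would if the rankings were luck all along.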

Which may cast some light on why liberal pundits did especially well in this test.  If you were the sort of person who is systematically biased towards predicting a bad end for Republicans, and a rosy future for Democrats, then election year 2008 was going to make you look like a genius.  If you were the sort of person who takes a generally dim view of anything Democrats get up to, then your pessimism was probably going to hit more often than it missed.

It would be interesting to go back and look at the same group in the year running up to 2010.  But even then, it would tell us very little.  To do any sort of true test, we'd have to get a bunch of these prognosticators to all make predictions about the same binary events, over a lengthy period of time, and then see how they fared over a multi-year period. I suspect that they'd end up looking a lot like mutual fund managers: little variation distinguishable from random chance.

Once you take into account their fees, mutual fund managers, as a group, underperform the market.  And I suspect you'd see the same thing with pundits: as a group, they'd slightly underperform a random coin flip.  People like Lindsay Graham cannot go on Meet the Press and say "Yup, we're going to lose on November 2nd" even when it is completely obvious that this is what will happen; they need to project optimism for their base.  Over time, that obligatory optimism about no-hope causes will put a slight negative drag on the predictive power of their statements.
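
The arithmetic behind that drag can be made concrete.  In the sketch below (the win probabilities are invented for illustration), a pundit obliged to predict victory in every race is right exactly as often as his side actually wins, while a coin-flipper is right half the time no matter what:

```python
# Hypothetical win probabilities for races a partisan pundit must call,
# skewed low to include a few "no-hope causes" he has to be upbeat about.
races = [0.6, 0.4, 0.3, 0.2, 0.1]

# Always predicting "we win" is correct exactly when the side wins,
# so expected accuracy is just the mean win probability.
optimist = sum(races) / len(races)

# A coin-flipper is right with probability 0.5 on every binary race,
# since 0.5*p + 0.5*(1-p) = 0.5 regardless of p.
coin = 0.5

print(f"always-optimistic pundit: {optimist:.0%}, coin flip: {coin:.0%}")
# → always-optimistic pundit: 32%, coin flip: 50%
```

The more hopeless the causes a pundit is required to cheer for, the further below the coin flip his record sinks.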

Does that undermine the credibility of pundits?  I don't think that predictions are the fundamental purpose of punditry (though I do encourage people to make them as a way of raising the stakes on the truth claims they make, and in order to give us a benchmark against which to analyze our reasoning).  Pundits offer predictions, yes, but more importantly, they offer you facts, context, and analysis.  Their really important work is to help you make your own wrong predictions about the world.