I know that Paul Krugman was not really serious when he linked this study naming him the most accurate prognosticator in America. Nonetheless, it's getting some play around the internet, and a warm reception from people who don't seem to know any better, so it's worth pointing out why this sort of thing is so dreadful. I mean, I'm sure it was a very fine senior project for the Hamilton College students who produced it, but the results tell us nothing at all about the state of prognostication in this country.
Krugman quotes this segment from the Hamilton College press release:
Now, a class at Hamilton College led by public policy professor P. Gary Wyckoff has analyzed the predictions of 26 prognosticators between September 2007 and December 2008. Their findings? Anyone can make as accurate a prediction as most of them just by flipping a coin.
The students found that only nine of the prognosticators they studied could predict more accurately than a coin flip. Two were significantly less accurate, and the remaining 14 were not statistically any better or worse than a coin flip.
The top prognosticators--led by New York Times columnist Paul Krugman--scored above five points and were labeled "Good," while those scoring between zero and five were "Bad." Anyone scoring less than zero (which was possible because prognosticators lost points for inaccurate predictions) was put into "The Ugly" category. Syndicated columnist Cal Thomas came up short and scored the lowest of the 26.
I myself read Paul Krugman more often than Cal Thomas, so perhaps I should take this as evidence of my perspicacity . . . but no. This is nonsense. The study covers a little over a year, between September 2007 and December 2008. They didn't even look at all of the statements made by the prognosticators, but at a "representative sample", presumably because they couldn't handle the volume that analyzing all of it would require. Some of the prognosticators made too few testable predictions to generate good results, and the riskiness of the predictions varied--someone who predicted in October 2008 that Obama was going to win the election seems to have gotten the same "score" for that call as someone who predicted it in September 2007. The number of predictions also varied between commentators, making comparison even more difficult.
Against this background, it makes no sense to say--as the students and the press release do--that this study shows that "a number of individuals in our sample, including Paul Krugman, Maureen Dowd, Ed Rendell, Chuck Schumer, Nancy Pelosi, and Kathleen Parker were better than a coin flip (sometimes, substantially so.)" One of the commonest fallacies you see among beginning students of probability is the belief that if a coin has a 50% chance of turning up heads, then anyone who flips a coin multiple times should end up getting half heads, and half tails.
This is not true--especially when you have a small number of "flips", as most of the prognosticators did. (It's not surprising that George Will, who made the greatest number of predictions, was statistically very close to zero.) Rather, if you get a bunch of people to flip coins a bunch of times, you'll get a distribution. Most of the results will cluster close to 50/50 (as was true in this case), but you'll get outliers.
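To see how easily luck masquerades as skill here, consider a minimal simulation sketch. The counts below (26 pundits, 25 predictions apiece) are illustrative stand-ins, not the study's actual data:

```python
import random

# Hypothetical setup: 26 "pundits" each make 25 yes/no predictions by
# flipping a fair coin. The counts are invented for illustration; they
# are not the Hamilton College study's data.
random.seed(42)  # fixed seed so the run below is reproducible

NUM_PUNDITS = 26
NUM_PREDICTIONS = 25

accuracies = sorted(
    sum(random.random() < 0.5 for _ in range(NUM_PREDICTIONS)) / NUM_PREDICTIONS
    for _ in range(NUM_PUNDITS)
)

print(f"worst:  {accuracies[0]:.0%}")
print(f"median: {accuracies[NUM_PUNDITS // 2]:.0%}")
print(f"best:   {accuracies[-1]:.0%}")
# A typical run clusters near 50%, but the luckiest "pundit" scores
# around 65-70% correct -- an outlier produced by pure chance.
```

Rerun it without the fixed seed and the identities of the "winners" change each time, which is exactly the point.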
This is often pointed out in the case of mutual fund managers, as John Bogle has illustrated graphically.
And indeed, my finance profs taught me that the top mutual funds in a given year are not any more likely to show up as next year's top funds. In fact, they may be less likely to do well the next year. Why? Because funds have strategies, which do better or worse depending on market conditions. The funds that do well in a given year are probably the funds that were especially well positioned to show outsized fluctuations in response to whatever changed that year--but that also means that they're especially likely to lose money when those conditions change. Because the fluctuations are a random walk, they do not vindicate the fund manager's strategy or perspicacity--but they may seem to, temporarily.
Which may cast some light on why liberal pundits did especially well in this test. If you were the sort of person who was systematically biased towards predicting a bad end for Republicans, and a rosy future for Democrats, then election year 2008 was going to make you look like a genius. If you were the sort of person who took a generally dim view of anything Democrats got up to, then your pessimism was probably going to miss more often than it hit.
It would be interesting to go back and look at the same group in the year running up to 2010. But even then, it would tell us very little. To do any sort of true test, we'd have to get a bunch of these prognosticators to all make predictions about the same binary events, over a lengthy period of time, and then see how they fared over a multi-year period. I suspect that they'd end up looking a lot like mutual fund managers: showing little variation that could be distinguished from random chance.
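What would "distinguishable from a coin flip" look like under that design? Here is a minimal sketch, assuming an exact binomial test against a fair coin; the correct/total counts below are invented for illustration:

```python
from math import comb

def p_value_vs_coin(correct: int, total: int) -> float:
    """Two-sided exact binomial test against p = 0.5: the probability
    of a score at least this far from total/2 if every call were a
    fair coin flip."""
    deviation = abs(correct - total / 2)
    extreme = sum(comb(total, k) for k in range(total + 1)
                  if abs(k - total / 2) >= deviation)
    return min(extreme / 2 ** total, 1.0)

# Invented example: the same 68% hit rate is indistinguishable from a
# coin over 25 predictions, but strong evidence of skill over 250.
for correct, total in [(17, 25), (170, 250)]:
    print(f"{correct}/{total} correct: p = {p_value_vs_coin(correct, total):.2g}")
```

The denominator is the whole game: with only a couple dozen calls per pundit, even an impressive-looking hit rate cannot be separated from luck.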
Once you take into account their fees, mutual fund managers, as a group, underperform the market. And I suspect you'd see the same thing with pundits: as a group, they'd slightly underperform a random coin flip. People like Lindsey Graham cannot go on Meet the Press and say "Yup, we're going to lose on November 2nd" even when it is completely obvious that this is what will happen; they need to project optimism for their base. Over time, that optimistic bias about no-hope causes will put a slight drag on the predictive power of their statements.
Does that undermine the credibility of pundits? I don't think that predictions are the fundamental purpose of punditry (though I do encourage people to make them as a way of raising the stakes on the truth claims they make, and in order to give us a benchmark against which to analyze our reasoning). Pundits offer predictions, yes, but more importantly, they offer you facts, context, and analysis. Their really important work is to help you make your own wrong predictions about the world.
“Here is what I would like for you to know: In America, it is traditional to destroy the black body—it is heritage.”
Last Sunday the host of a popular news show asked me what it meant to lose my body. The host was broadcasting from Washington, D.C., and I was seated in a remote studio on the far west side of Manhattan. A satellite closed the miles between us, but no machinery could close the gap between her world and the world for which I had been summoned to speak. When the host asked me about my body, her face faded from the screen, and was replaced by a scroll of words, written by me earlier that week.
The host read these words for the audience, and when she finished she turned to the subject of my body, although she did not mention it specifically. But by now I am accustomed to intelligent people asking about the condition of my body without realizing the nature of their request. Specifically, the host wished to know why I felt that white America’s progress, or rather the progress of those Americans who believe that they are white, was built on looting and violence. Hearing this, I felt an old and indistinct sadness well up in me. The answer to this question is the record of the believers themselves. The answer is American history.
Defining common cultural literacy for an increasingly diverse nation.
Is the culture war over?
That seems an absurd question. This is an age when Confederate monuments still stand; when white-privilege denialism is surging on social media; when legislators and educators in Arizona and Texas propose banning ethnic studies in public schools and assign textbooks euphemizing the slave trade; when fear of Hispanic and Asian immigrants remains strong enough to prevent immigration reform in Congress; when the simple assertion that #BlackLivesMatter cannot be accepted by all but is instead contested petulantly by many non-blacks as divisive, even discriminatory.
And that’s looking only at race. Add gender, guns, gays, and God to the mix and the culture war seems to be raging along quite nicely.
As the world frets over Greece, a separate crisis looms in China.
This summer has not been calm for the global economy. In Europe, a Greek referendum this Sunday may determine whether the country will remain in the eurozone. In North America, meanwhile, the governor of Puerto Rico claimed last week that the island would be unable to pay off its debts, raising unsettling questions about the health of American municipal bonds.
But the season’s biggest economic crisis may be occurring in Asia, where shares in China’s two major stock exchanges have nosedived in the past three weeks. Since June 12, the Shanghai stock exchange has lost 24 percent of its value, while the damage in the southern city of Shenzhen has been even greater at 30 percent. The tumble has already wiped out more than $2.4 trillion in wealth—a figure roughly 10 times the size of Greece’s economy.
The Islamic State is no mere collection of psychopaths. It is a religious group with carefully considered beliefs, among them that it is a key agent of the coming apocalypse. Here’s what that means for its strategy—and for how to stop it.
What is the Islamic State?
Where did it come from, and what are its intentions? The simplicity of these questions can be deceiving, and few Western leaders seem to know the answers. In December, The New York Times published confidential comments by Major General Michael K. Nagata, the Special Operations commander for the United States in the Middle East, admitting that he had hardly begun figuring out the Islamic State’s appeal. “We have not defeated the idea,” he said. “We do not even understand the idea.” In the past year, President Obama has referred to the Islamic State, variously, as “not Islamic” and as al-Qaeda’s “jayvee team,” statements that reflected confusion about the group, and may have contributed to significant strategic errors.
A new book by the evolutionary biologist Jerry Coyne tackles arguments that the two institutions are compatible.
In May 1988, a 13-year-old girl named Ashley King was admitted to Phoenix Children’s Hospital by court order. She had a tumor on her leg—an osteogenic sarcoma—that, writes Jerry Coyne in his book Faith Versus Fact, was “larger than a basketball,” and was causing her leg to decay while her body started to shut down. Ashley’s Christian Scientist parents, however, refused to give doctors permission to amputate, and instead moved their daughter to a Christian Science sanatorium, where, in accordance with the tenets of their faith, “there was no medical care, not even pain medication.” Ashley’s mother and father arranged a collective pray-in to help her recover—to no avail. Three weeks later, she died.
In 1992, the neuroscientist Richard Davidson got a challenge from the Dalai Lama. By that point, he’d spent his career asking why people respond to, in his words, “life’s slings and arrows” in different ways. Why are some people more resilient than others in the face of tragedy? And is resilience something you can gain through practice?
The Dalai Lama had a different question for Davidson when he visited the Tibetan Buddhist spiritual leader at his residence in Dharamsala, India. “He said: ‘You’ve been using the tools of modern neuroscience to study depression, and anxiety, and fear. Why can’t you use those same tools to study kindness and compassion?’ … I did not have a very good answer. I said it was hard.”
Former Senator Jim Webb is the fifth Democrat to enter the race—and by far the most conservative one.
In a different era’s Democratic Party, Jim Webb might be a serious contender for the presidential nomination. He’s a war hero and former Navy secretary, but he has been an outspoken opponent of recent military interventions. He’s a former senator from Virginia, a purple state. He has a strong populist streak, could appeal to working-class white voters, and might even have crossover appeal from his days as a member of the Reagan administration.
In today’s leftward-drifting Democratic Party, however, it’s hard to see Webb—who declared his candidacy Thursday—getting very far. As surprising as Bernie Sanders’s rise in the polls has been, he looks more like the Democratic base than Webb does. The Virginian is progressive on a few major issues, including the military and campaign spending, but he’s at the center, or even the right, on others: He's against affirmative action, supports gun rights, and is a defender of coal. During the George W. Bush administration, Democrats loved to have him as a foil to the White House. It’s hard to imagine the national electorate will cotton to him in the same way. Webb’s statement essentially saying he had no problem with the Confederate battle flag flying in places like the grounds of the South Carolina capitol may have been the final straw. (At 69, he’s also older than Hillary Clinton, whose age has been a topic of debate, though still younger than Bernie Sanders or Joe Biden.)
For centuries, experts have predicted that machines would make workers obsolete. That moment may finally be arriving. Could that be a good thing?
1. Youngstown, U.S.A.
The end of work is still just a futuristic concept for most of the United States, but it is something like a moment in history for Youngstown, Ohio, one its residents can cite with precision: September 19, 1977.
For much of the 20th century, Youngstown’s steel mills delivered such great prosperity that the city was a model of the American dream, boasting a median income and a homeownership rate that were among the nation’s highest. But as manufacturing shifted abroad after World War II, Youngstown steel suffered, and on that gray September afternoon in 1977, Youngstown Sheet and Tube announced the shuttering of its Campbell Works mill. Within five years, the city lost 50,000 jobs and $1.3 billion in manufacturing wages. The effect was so severe that a term was coined to describe the fallout: regional depression.
The Fourth of July—a time we Americans set aside to celebrate our independence and mark the war we waged to achieve it, along with the battles that followed. There was the War of 1812, the War of 1833, the First Ohio-Virginia War, the Three States' War, the First Black Insurrection, the Great War, the Second Black Insurrection, the Atlantic War, the Florida Intervention.
Confused? These are actually conflicts invented for the novel The Disunited States of America by Harry Turtledove, a prolific (and sometimes-pseudonymous) author of alternate histories with a Ph.D. in Byzantine history. The book is set in the 2090s in an alternate United States that is far from united. In fact, the states, having failed to ratify a constitution following the American Revolution, are separate countries that oscillate between cooperating and warring with one another, as in Europe.
Highlights from seven days of reading about entertainment
British Cinemas Need to Do Better for Black Audiences
Simran Hans | Buzzfeed
“The myth that black people don’t go to the cinema becomes a self-fulfilling prophecy, predicated on the assumption that cinemagoers are only interested in seeing themselves represented on screen. This seems to be at the heart of the problem.”
Hump Day: The Utterly OMG Magic Mike XXL
Wesley Morris | Grantland
“Not since the days of peak Travolta and Dirty Dancing has a film so perfectly nailed something essential about movie lust: Male vulnerability is hot, particularly when the man is dancing with and therefore for a woman. It aligns the entire audience with the complex prerogatives of female desire.”