I know that Paul Krugman was not really serious when he linked this study naming him the most accurate prognosticator in America. Nonetheless, it's getting some play around the internet, and a warm reception from people who don't seem to know any better, so it's worth pointing out why this sort of thing is so dreadful. I mean, I'm sure it was a very fine senior project for the Hamilton College students who produced it, but the results tell us nothing at all about the state of prognostication in this country.
Krugman quotes this segment from the Hamilton College press release:
Now, a class at Hamilton College led by public policy professor P. Gary Wyckoff has analyzed the predictions of 26 prognosticators between September 2007 and December 2008. Their findings? Anyone can make as accurate a prediction as most of them just by flipping a coin.
The students found that only nine of the prognosticators they studied could predict more accurately than a coin flip. Two were significantly less accurate, and the remaining 14 were not statistically any better or worse than a coin flip.
The top prognosticators - led by New York Times columnist Paul Krugman - scored above five points and were labeled "Good," while those scoring between zero and five were "Bad." Anyone scoring less than zero (which was possible because prognosticators lost points for inaccurate predictions) were put into "The Ugly" category. Syndicated columnist Cal Thomas came up short and scored the lowest of the 26.
I myself read Paul Krugman more often than Cal Thomas, so perhaps I should take this as evidence of my perspicacity . . . but no. This is nonsense. The study runs for a little over a year, from September 2007 to December 2008. They didn't even look at all of the statements made by the prognosticators, but at a "representative sample", presumably because they couldn't handle the volume that analyzing all of it would require. Some of the prognosticators made too few testable predictions to generate good results, and the riskiness of the predictions varied--someone who predicted in October 2008 that Obama was going to win the election seems to have gotten the same "score" for that call as someone who predicted it in September 2007. The number of predictions also varied between commentators, making comparison even more difficult.
Against this background, it makes no sense to say--as the students and the press release do--that this study shows that "a number of individuals in our sample, including Paul Krugman, Maureen Dowd, Ed Rendell, Chuck Schumer, Nancy Pelosi, and Kathleen Parker were better than a coin flip (sometimes, substantially so.)" One of the commonest fallacies you see among beginning students of probability is the belief that if a coin has a 50% chance of turning up heads, then anyone who flips a coin multiple times should end up getting half heads, and half tails.
This is not true--especially when you have a small number of "flips", as most of the prognosticators did. (It's not surprising that George Will, who made the greatest number of predictions, was statistically very close to zero.) Rather, if you get a bunch of people to flip coins a bunch of times, you'll get a distribution. Most of the results will cluster close to 50/50 (as was true in this case), but you'll get outliers.
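To see how easily luck manufactures apparent skill at these sample sizes, here is a minimal simulation--my own sketch, not the students' method: 26 pure coin-flippers answer yes/no questions, and we ask how often at least one of them clears an impressive-looking accuracy bar. The per-pundit prediction counts and the 65 percent bar are assumptions chosen purely for illustration.

```python
# Sketch: how often does at least one of 26 pure coin-flippers look
# "better than chance"? (Panel size matches the study; prediction
# counts and the 65% accuracy bar are illustrative assumptions.)
import random

random.seed(0)
PUNDITS, TRIALS = 26, 2_000

def chance_someone_looks_good(n_predictions, bar=0.65):
    """Fraction of simulated panels in which at least one coin-flipper
    scores at or above `bar` accuracy on n_predictions guesses."""
    lucky = 0
    for _ in range(TRIALS):
        best = max(
            sum(random.getrandbits(1) for _ in range(n_predictions)) / n_predictions
            for _ in range(PUNDITS)
        )
        lucky += best >= bar
    return lucky / TRIALS

for n in (10, 25, 100):
    print(f"{n:>3} predictions each: {chance_someone_looks_good(n):.0%}")
```

With only ten predictions apiece, some coin-flipper almost always ends up looking like a seer; the effect fades only once everyone has made on the order of a hundred calls.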
This is often pointed out in the case of mutual fund managers, as John Bogle has illustrated with a well-known graph.
And indeed, my finance profs taught me that the top mutual funds in a given year are no more likely to show up as the next year's top funds. If anything, they may be less likely to do well the next year. Why? Because funds have strategies, which do better or worse depending on market conditions. The funds that do well in a given year are probably the funds that were especially well positioned to show outsized fluctuations in response to whatever changed that year--but that also means they're especially likely to lose money when those conditions change. Because the fluctuations are a random walk, they do not vindicate the fund manager's strategy or perspicacity--but they may seem to, temporarily.
Which may cast some light on why liberal pundits did especially well in this test. If you were the sort of person who was systematically biased toward predicting a bad end for Republicans and a rosy future for Democrats, then election year 2008 was going to make you look like a genius. If you were the sort of person who took a generally dim view of anything Democrats got up to, then your pessimism was probably going to miss more often than it hit.
It would be interesting to go back and look at the same group in the year running up to 2010. But even then, it would tell us very little. To do any sort of true test, we'd have to get a bunch of these prognosticators to all make predictions about the same binary events, and then see how they fared over a multi-year period. I suspect that they'd end up looking a lot like mutual fund managers: showing little variation that could be distinguished from chance.
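For what it's worth, here is a sketch of how such a head-to-head could be scored, assuming every pundit is forced to put a probability on the same binary events. The Brier score is my choice of yardstick, not anything from the Hamilton study, and the pundits, forecasts, and outcomes below are invented.

```python
# Sketch of a fairer pundit scoreboard: everyone forecasts the same
# binary events, and we compare average squared error (Brier score).
# All names, probabilities, and outcomes are made up for illustration.

def brier_score(forecasts, outcomes):
    """Mean squared error between stated probabilities and 0/1 outcomes.
    Always saying 50/50 earns 0.25; lower is better."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(outcomes)

# Did event i happen? (1 = yes, 0 = no)
outcomes = [1, 0, 1, 1, 0, 0, 1, 0]

pundits = {
    "always hedges at 50/50":    [0.5] * 8,
    "confident and often right": [0.9, 0.1, 0.8, 0.7, 0.2, 0.3, 0.9, 0.2],
    "confident and often wrong": [0.2, 0.8, 0.3, 0.2, 0.9, 0.7, 0.1, 0.8],
}

for name, forecasts in pundits.items():
    print(f"{name:<27} Brier = {brier_score(forecasts, outcomes):.3f}")
```

A proper scoring rule like this also gets at the riskiness problem above: a confident call gains, and loses, far more than a 50/50 hedge, so the permanent fence-sitter can't float to the top.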
Once you take into account their fees, mutual fund managers, as a group, underperform the market. And I suspect you'd see the same thing with pundits: as a group, they'd slightly underperform a random coin flip. People like Lindsey Graham cannot go on Meet the Press and say "Yup, we're going to lose on November 2nd" even when it is completely obvious that this is what will happen; they need to project optimism for their base. Over time, that optimistic spin on no-hope causes puts a slight negative drag on the predictive power of their statements: a pundit who calls every toss-up for his own side, in a cycle where that side wins only 40 percent of them, converges on 40 percent accuracy--a bit worse than the coin.
Does that undermine the credibility of pundits? I don't think that predictions are the fundamental purpose of punditry (though I do encourage people to make them as a way of raising the stakes on the truth claims they make, and in order to give us a benchmark against which to analyze our reasoning). Pundits offer predictions, yes, but more importantly, they offer you facts, context, and analysis. Their really important work is to help you make your own, probably wrong, predictions about the world.