I know that Paul Krugman was not really serious when he linked this study naming him the most accurate prognosticator in America. Nonetheless, it's getting some play around the internet, and a warm reception from people who don't seem to know any better, so it's worth pointing out why this sort of thing is so dreadful. I mean, I'm sure it was a very fine senior project for the Hamilton College students who produced it, but the results tell us nothing at all about the state of prognostication in this country.
Krugman quotes this segment from the Hamilton College press release:
Now, a class at Hamilton College led by public policy professor P. Gary Wyckoff has analyzed the predictions of 26 prognosticators between September 2007 and December 2008. Their findings? Anyone can make as accurate a prediction as most of them just by flipping a coin.
The students found that only nine of the prognosticators they studied could predict more accurately than a coin flip. Two were significantly less accurate, and the remaining 14 were not statistically any better or worse than a coin flip.
The top prognosticators - led by New York Times columnist Paul Krugman - scored above five points and were labeled "Good," while those scoring between zero and five were "Bad." Anyone scoring less than zero (which was possible because prognosticators lost points for inaccurate predictions) were put into "The Ugly" category. Syndicated columnist Cal Thomas came up short and scored the lowest of the 26.
I myself read Paul Krugman more often than Cal Thomas, so perhaps I should take this as evidence of my perspicacity . . . but no. This is nonsense. The study runs for a little over a year, between September 2007 and December 2008. The students didn't even look at all of the statements made by the prognosticators, but at a "representative sample", presumably because they couldn't handle the volume required to analyze all of it. Some of the prognosticators made too few testable predictions to generate good results, and the riskiness of the predictions varied--someone who predicted in October 2008 that Obama was going to win the election seems to have gotten the same "score" for that call as someone who predicted it in September 2007. The number of predictions also varied between commentators, making comparison even more difficult.
Against this background, it makes no sense to say--as the students and the press release do--that this study shows that "a number of individuals in our sample, including Paul Krugman, Maureen Dowd, Ed Rendell, Chuck Schumer, Nancy Pelosi, and Kathleen Parker were better than a coin flip (sometimes, substantially so.)" One of the commonest fallacies you see among beginning students of probability is the belief that if a coin has a 50% chance of turning up heads, then anyone who flips a coin multiple times should end up getting half heads, and half tails.
This is not true--especially when you have a small number of "flips", as most of the prognosticators did. (It's not surprising that George Will, who made the greatest number of predictions, was statistically very close to zero.) Rather, if you get a bunch of people to flip coins a bunch of times, you'll get a distribution. Most of the results will cluster close to 50/50 (as was true in this case), but you'll get outliers.
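A quick simulation makes the small-sample point concrete. This is purely a hypothetical sketch, not the students' methodology: the number of "pundits" and predictions per pundit below are made-up figures, and each pundit is just a fair coin.

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

def flip_score(n_predictions):
    """Fraction of 'correct' calls for a pure coin-flipper
    making n_predictions binary predictions."""
    hits = sum(random.random() < 0.5 for _ in range(n_predictions))
    return hits / n_predictions

# 26 coin-flipping "pundits" making only a dozen calls each:
# some will look "Good" and some "Ugly" purely by chance.
few = [flip_score(12) for _ in range(26)]

# The same 26 "pundits" making 200 calls each: scores cluster
# much more tightly around 0.5, like George Will in the study.
many = [flip_score(200) for _ in range(26)]

print("12 predictions each:  min %.2f  max %.2f" % (min(few), max(few)))
print("200 predictions each: min %.2f  max %.2f" % (min(many), max(many)))
```

The standard deviation of a coin-flipper's score is roughly `sqrt(0.25 / n)`, so a pundit graded on 12 predictions scatters about four times as widely as one graded on 200. Outliers at the top of the first group are exactly what you'd expect from chance alone.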
This is often pointed out in the case of mutual fund managers, as John Bogle does using this graph:
And indeed, my finance profs taught me that the top mutual funds in a given year are not any more likely to show up as next year's top funds. Indeed, they may be less likely to do well the next year. Why? Because funds have strategies, which do better or worse depending on market conditions. The funds that do well in a given year are probably the funds that were especially well positioned to show outsized fluctuations in response to whatever changed that year--but that also means that they're especially likely to lose money when those conditions change. Because the fluctuations are a random walk, they do not vindicate the fund manager's strategy or perspicacity--but they may seem to, temporarily.
Which may cast some light on why liberal pundits did especially well in this test. If you were the sort of person who is systematically biased towards predicting a bad end for Republicans, and a rosy future for Democrats, then election year 2008 was going to make you look like a genius. If you were the sort of person who takes a generally dim view of anything Democrats get up to, then your pessimism was probably going to hit more often than it missed.
It would be interesting to go back and look at the same group in the year running up to 2010. But even then, it would tell us very little. To do any sort of a true test, we'd have to get a bunch of these prognosticators to all make predictions about the same binary events, over a lengthy period of time, and then see how they fared over a multi-year period. I suspect that they'd end up looking a lot like mutual fund managers: little variation that could be distinguished from random variance.
Once you take into account their fees, mutual fund managers, as a group, underperform the market. And I suspect you'd see the same thing with pundits: as a group, they'd slightly underperform a random coin flip. People like Lindsey Graham cannot go on Meet the Press and say "Yup, we're going to lose on November 2nd" even when it is completely obvious that this is what will happen; they need to present an optimistic bias for their base. Over time, that optimistic bias about no-hope causes will cause a slight negative drag on the predictive power of their statements.
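The "optimistic bias" drag can also be sketched with a toy model. The numbers here are entirely invented for illustration: assume a partisan pundit who always calls a win for their side, applied to no-hope races their side actually wins only 30% of the time.

```python
import random

random.seed(7)  # fixed seed for reproducibility

def accuracy(predict_win_prob, true_win_prob, trials=100_000):
    """Fraction of correct calls for a pundit who predicts 'win'
    with probability predict_win_prob, in races where the true
    chance of a win is true_win_prob."""
    correct = 0
    for _ in range(trials):
        outcome = random.random() < true_win_prob   # what happened
        call = random.random() < predict_win_prob   # what was predicted
        correct += (call == outcome)
    return correct / trials

# In a race the pundit's side wins only 30% of the time:
optimist = accuracy(1.0, 0.3)  # always predicts a win -> right ~30%
flipper = accuracy(0.5, 0.3)   # coin flip -> right ~50% regardless

print("optimistic partisan: %.3f" % optimist)
print("coin flipper:        %.3f" % flipper)
```

A coin flip is right about half the time no matter how lopsided the race, while the obligatory optimist's accuracy tracks the (low) true win probability, which is exactly the negative drag described above.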
Does that undermine the credibility of pundits? I don't think that predictions are the fundamental purpose of punditry (though I do encourage people to make them as a way of raising the stakes on the truth claims they make, and in order to give us a benchmark against which to analyze our reasoning). Pundits offer predictions, yes, but more importantly, they offer you facts, context, and analysis. Their really important work is to help you make your own wrong predictions about the world.
Though it wasn’t pretty, Minaj was really teaching a lesson in civility.
Nicki Minaj didn’t, in the end, say much to Miley Cyrus at all. If you only read the comments that lit up the Internet at last night’s MTV Video Music Awards, you might think she was kidding, or got cut off, when she “called out” the former Disney star who was hosting: “And now, back to this bitch that had a lot to say about me the other day in the press. Miley, what’s good?”
To summarize: When Minaj’s “Anaconda” won the award for Best Hip-Hop Video, she took to the stage in a slow shuffle, shook her booty with presenter Rebel Wilson, and then gave an acceptance speech in which she switched vocal personas as amusingly as she does in her best raps—street-preacher-like when telling women “don’t you be out here depending on these little snotty-nosed boys”; sweetness and light when thanking her fans and pastor. Then a wave of nausea seemed to come over her, and she turned her gaze toward Cyrus. To me, the look on her face, not the words that she said, was the news of the night:
In the name of emotional well-being, college students are increasingly demanding protection from words and ideas they don’t like. Here’s why that’s disastrous for education—and mental health.
Something strange is happening at America’s colleges and universities. A movement is arising, undirected and driven largely by students, to scrub campuses clean of words, ideas, and subjects that might cause discomfort or give offense. Last December, Jeannie Suk wrote in an online article for The New Yorker about law students asking her fellow professors at Harvard not to teach rape law—or, in one case, even use the word violate (as in “that violates the law”) lest it cause students distress. In February, Laura Kipnis, a professor at Northwestern University, wrote an essay in The Chronicle of Higher Education describing a new campus politics of sexual paranoia—and was then subjected to a long investigation after students who were offended by the article and by a tweet she’d sent filed Title IX complaints against her. In June, a professor protecting himself with a pseudonym wrote an essay for Vox describing how gingerly he now has to teach. “I’m a Liberal Professor, and My Liberal Students Terrify Me,” the headline said. A number of popular comedians, including Chris Rock, have stopped performing on college campuses (see Caitlin Flanagan’s article in this month’s issue). Jerry Seinfeld and Bill Maher have publicly condemned the oversensitivity of college students, saying too many of them can’t take a joke.
Thicker ink, fewer smudges, and more strained hands: an Object Lesson
Recently, Bic launched a campaign to “save handwriting.” Named “Fight for Your Write,” it includes a pledge to “encourage the act of handwriting” in the pledge-taker’s home and community, and emphasizes putting more of the company’s ballpoints into classrooms.
As a teacher, I couldn’t help but wonder how anyone could think there’s a shortage. I find ballpoint pens all over the place: on classroom floors, behind desks. Dozens of castaways collect in cups on every teacher’s desk. They’re so ubiquitous that the word “ballpoint” is rarely used; they’re just “pens.” But despite its popularity, the ballpoint pen is relatively new in the history of handwriting, and its influence on popular handwriting is more complicated than the Bic campaign would imply.
After calling his intellectual opponents treasonous, and allegedly exaggerating his credentials, a controversial law professor resigns from the United States Military Academy.
On Monday, West Point law professor William C. Bradford resigned after The Guardian reported that he had allegedly inflated his academic credentials. Bradford made headlines last week, when the editors of the National Security Law Journal denounced a controversial article by him in their own summer issue:
As the incoming Editorial Board, we want to address concerns regarding Mr. Bradford’s contention that some scholars in legal academia could be considered as constituting a fifth column in the war against terror; his interpretation is that those scholars could be targeted as unlawful combatants. The substance of Mr. Bradford’s article cannot fairly be considered apart from the egregious breach of professional decorum that it exhibits. We cannot “unpublish” it, of course, but we can and do acknowledge that the article was not presentable for publication when we published it, and that we therefore repudiate it with sincere apologies to our readers.
Accusations of terrorism are a window into how the Turkish government tries to intimidate reporters, but also how a media bad boy is maturing.
Under Recep Tayyip Erdogan’s presidency, Turkish journalists have increasingly been badgered, intimidated, threatened, and punished. Now, however, the Turkish government is going after two foreign journalists.
It’s not difficult to see why the Turkish government might not want journalists in the area. Kurdish fighters, some backed by the U.S., have been battling ISIS in Iraq for months. While Turkey opposes ISIS, it’s also terrified of emboldened Kurds pushing for an autonomous state in the region. For decades, Ankara has fought a protracted war against Kurdish guerrilla groups in southeastern Turkey. After long trying to avoid being drawn into the conflict against ISIS, Turkey, a U.S. ally, has begun to take action, but it’s fighting against both ISIS and the Kurds, a strange case where, for the Turkish government, the enemy of my enemy might still be my enemy.
The Islamic State is no mere collection of psychopaths. It is a religious group with carefully considered beliefs, among them that it is a key agent of the coming apocalypse. Here’s what that means for its strategy—and for how to stop it.
What is the Islamic State?
Where did it come from, and what are its intentions? The simplicity of these questions can be deceiving, and few Western leaders seem to know the answers. In December, The New York Times published confidential comments by Major General Michael K. Nagata, the Special Operations commander for the United States in the Middle East, admitting that he had hardly begun figuring out the Islamic State’s appeal. “We have not defeated the idea,” he said. “We do not even understand the idea.” In the past year, President Obama has referred to the Islamic State, variously, as “not Islamic” and as al-Qaeda’s “jayvee team,” statements that reflected confusion about the group, and may have contributed to significant strategic errors.
The neurologist leaves behind a body of work that reveals a lifetime of asking difficult questions with empathy.
Oliver Sacks always seemed propelled by joyful curiosity. The neurologist’s writing is infused with this quality—equal parts buoyancy and diligence, the exuberant asking of difficult questions.
More specifically, Sacks had a fascination with ways of seeing and hearing and thinking. Which is another way of exploring experiences of living. He focused on modes of perception that are delightful not only because they are subjective, but precisely because they are very often faulty.
To say Sacks had a gift for this method of exploration is an understatement. He was a master at connecting curiosity to observation, and observation to emotion. Sacks died on Sunday after receiving a terminal cancer diagnosis earlier this year. He was 82.
The use of a stick to hold a camera at a distance for a self-portrait is not a new phenomenon, but the popularity of the new breed of extendable selfie stick has exploded over the past two years.
Multiple companies are producing varied versions of the device, tailored mostly to smartphone users. These sometimes-unwieldy extenders have been labeled as nuisances by some, especially in crowded public spaces, and have been banned in many museums, stadiums, and theme parks. Collected here are recent images of selfie sticks in use around the world, from high in the sky above China to the shores of Greece and beyond.
Many educators are introducing meditation into the classroom as a means of improving kids’ attention and emotional regulation.
A five-minute walk from the rickety, raised track that carries the 5 train through the Bronx, the English teacher Argos Gonzalez balanced a rounded metal bowl on an outstretched palm. His class—a mix of black and Hispanic students in their late teens, most of whom live in one of the poorest districts in New York City—by now were used to the sight of this unusual object: a Tibetan meditation bell.
“Today we’re going to talk about mindfulness of emotion,” Gonzalez said with a hint of a Venezuelan accent. “You guys remember what mindfulness is?” Met with quiet stares, Gonzalez gestured to one of the posters pasted at the back of the classroom, where the students a few weeks earlier had brainstormed terms describing the meaning of “mindfulness.” There were some tentative mumblings: “being focused,” “being aware of our surroundings.”
The tennis player is arguably the era’s greatest athlete, but she has fewer endorsements than other less-successful players.
The U.S. Open begins today (August 31), and Serena Williams has a chance to make tennis history. A win would put her at 22 career Grand Slam titles, tying Steffi Graf for second most, behind only Margaret Court. Her astonishing ability prompts arguments that she’s the sport’s greatest female player of all time, and currently the most dominant U.S. athlete of either sex, in any sport. Katrina Adams, the president of the U.S. Tennis Association, recently posited that Williams is the greatest athlete ever—period.