I know that Paul Krugman was not really serious when he linked this study naming him the most accurate prognosticator in America. Nonetheless, it's getting some play around the internet, and a warm reception from people who don't seem to know any better, so it's worth pointing out why this sort of thing is so dreadful. I mean, I'm sure it was a very fine senior project for the Hamilton College students who produced it, but the results tell us nothing at all about the state of prognostication in this country.
Krugman quotes this segment from the Hamilton College press release:
Now, a class at Hamilton College led by public policy professor P. Gary Wyckoff has analyzed the predictions of 26 prognosticators between September 2007 and December 2008. Their findings? Anyone can make as accurate a prediction as most of them just by flipping a coin.
The students found that only nine of the prognosticators they studied could predict more accurately than a coin flip. Two were significantly less accurate, and the remaining 14 were not statistically any better or worse than a coin flip.
The top prognosticators - led by New York Times columnist Paul Krugman - scored above five points and were labeled "Good," while those scoring between zero and five were "Bad." Anyone scoring less than zero (which was possible because prognosticators lost points for inaccurate predictions) were put into "The Ugly" category. Syndicated columnist Cal Thomas came up short and scored the lowest of the 26.
I myself read Paul Krugman more often than Cal Thomas, so perhaps I should take this as evidence of my perspicacity . . . but no. This is nonsense. The study runs for a little over a year, between September 2007 and December 2008. They didn't even look at all of the statements made by the prognosticators, but at a "representative sample", presumably because they couldn't handle the volume required to analyze all of them. Some of the prognosticators made too few testable predictions to generate good results, and the riskiness of the predictions varied--someone who predicted in October 2008 that Obama was going to win the election seems to have gotten the same "score" for that flip as someone who made the same call back in September 2007, when the outcome was far less certain. The number of predictions also varied between commentators, making comparison even more difficult.
Against this background, it makes no sense to say--as the students and the press release do--that this study shows that "a number of individuals in our sample, including Paul Krugman, Maureen Dowd, Ed Rendell, Chuck Schumer, Nancy Pelosi, and Kathleen Parker were better than a coin flip (sometimes, substantially so.)" One of the commonest fallacies you see among beginning students of probability is the belief that if a coin has a 50% chance of turning up heads, then anyone who flips a coin multiple times should end up getting half heads, and half tails.
This is not true--especially when you have a small number of "flips", as most of the prognosticators did. (It's not surprising that George Will, who made the greatest number of predictions, was statistically very close to zero.) Rather, if you get a bunch of people to flip coins a bunch of times, you'll get a distribution. Most of the results will cluster close to 50/50 (as was true in this case), but you'll get outliers.
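The point is easy to see in a toy simulation (this is my sketch, not part of the study; the pundit count matches the study's 26, but the 20-predictions-each figure is invented for illustration, since the real counts varied widely). Every "pundit" here is literally a fair coin, yet the best and worst of them still land well away from 50%:

```python
import random

random.seed(0)

N_PUNDITS = 26      # matching the study's sample size
N_PREDICTIONS = 20  # hypothetical; prediction counts varied in the study

# Each "pundit" flips a fair coin for every prediction.
accuracies = []
for _ in range(N_PUNDITS):
    hits = sum(random.random() < 0.5 for _ in range(N_PREDICTIONS))
    accuracies.append(hits / N_PREDICTIONS)

accuracies.sort()
print(f"worst: {accuracies[0]:.0%}, "
      f"median: {accuracies[N_PUNDITS // 2]:.0%}, "
      f"best: {accuracies[-1]:.0%}")
```

Run it and the "best" coin looks meaningfully better than chance, and the "worst" meaningfully worse, even though every one of them has exactly zero skill. Ranking them afterward and crowning the top scorer "Good" is just relabeling the tails of the distribution.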
This is often pointed out in the case of mutual fund managers, as John Bogle does using this graph:
And indeed, my finance profs taught me that the top mutual funds in a given year are not any more likely to show up as next year's top funds. Indeed, they may be less likely to do well the next year. Why? Because funds have strategies, which do better or worse depending on market conditions. The funds that do well in a given year are probably the funds that were especially well positioned to show outsized fluctuations in response to whatever changed that year--but that also means that they're especially likely to lose money when those conditions change. Because the fluctuations are a random walk, they do not vindicate the fund manager's strategy or perspicacity--but they may seem to, temporarily.
Which may cast some light on why liberal pundits did especially well in this test. If you were the sort of person who is systematically biased towards predicting a bad end for Republicans, and a rosy future for Democrats, then election year 2008 was going to make you look like a genius. If you were the sort of person who takes a generally dim view of anything Democrats get up to, then your pessimism was probably going to hit more often than it missed.
It would be interesting to go back and look at the same group in the year running up to 2010. But even then, it would tell us very little. To do any sort of a true test, we'd have to get a bunch of these prognosticators to all make predictions about the same binary events, over a lengthy period of time, and then see how they fared over a multi-year period. I suspect that they'd end up looking a lot like mutual fund managers: little variation that could be distinguished from random variance.
Once you take into account their fees, mutual fund managers, as a group, underperform the market. And I suspect you'd see the same thing with pundits: as a group, they'd slightly underperform a random coin flip. People like Lindsey Graham cannot go on Meet the Press and say "Yup, we're going to lose on November 2nd" even when it is completely obvious that this is what will happen; they need to present an optimistic bias for their base. Over time, that optimistic bias about no-hope causes will cause a slight negative drag on the predictive power of their statements.
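That drag is easy to demonstrate with another toy model (again my own invention, not anything from the study; the uniform win probabilities and the 0.2 threshold are made-up assumptions). A "calibrated" pundit predicts whichever outcome is more likely; an "optimistic" one predicts a win for their side in everything short of a near-certain loss:

```python
import random

random.seed(1)
N_EVENTS = 100_000

honest_hits = biased_hits = 0
for _ in range(N_EVENTS):
    p_win = random.random()            # true chance the pundit's side wins
    outcome = random.random() < p_win  # did their side actually win?
    # A calibrated pundit predicts whichever outcome is more likely.
    honest_hits += (p_win >= 0.5) == outcome
    # An optimistic pundit predicts a win for anything short of a
    # near-certain defeat -- including the no-hope causes.
    biased_hits += (p_win >= 0.2) == outcome

print(f"calibrated: {honest_hits / N_EVENTS:.1%}")
print(f"optimistic: {biased_hits / N_EVENTS:.1%}")
```

Under these assumptions the calibrated pundit lands near 75% and the optimistic one near 66%: a persistent, if modest, penalty for cheerleading, which only a long multi-year sample would reveal.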
Does that undermine the credibility of pundits? I don't think that predictions are the fundamental purpose of punditry (though I do encourage people to make them as a way of raising the stakes on the truth claims they make, and in order to give us a benchmark against which to analyze our reasoning). Pundits offer predictions, yes, but more importantly, they offer you facts, context, and analysis. Their really important work is to help you make your own wrong predictions about the world.
Passengers on a domestic flight deplaning in New York were asked to present ID by Customs and Border Protection agents—a likely unenforceable demand that nevertheless diminishes freedom.
American citizens had their introduction to the Trump-era immigration machine Wednesday, when Customs and Border Protection agents met an airliner that had just landed at New York’s JFK airport after a flight from San Francisco. According to passenger accounts, a flight attendant announced that all passengers would have to show their “documents” as they deplaned, and they did. The reason for the search, Homeland Security officials said, was to assist Immigration and Customs Enforcement in a search for a specific immigrant who had received a deportation order after multiple criminal convictions. The target was not on the flight.
After days of research, I can find no legal authority for ICE or CBP to require passengers to show identification on an entirely domestic flight. The ICE authorizing statute, 8 U.S.C. § 1357, provides that agents can conduct warrantless searches of “any person seeking admission to the United States”—if, that is, the officer has “reasonable cause to suspect” that the individual searched may be deportable. CBP’s statute, 19 U.S.C. § 1467, grants search authority “whenever a vessel from a foreign port or place or from a port or place in any Territory or possession of the United States arrives at a port or place in the United States.” CBP regulations, set out at 19 C.F.R. § 162.6, allow agents to search “persons, baggage, and merchandise arriving in the Customs territory of the United States from places outside thereof.”
The president has long toyed with the media, but the stakes are much higher now.
American presidents have often clashed with the press. But for a long time, the chief executive had little choice but to interact with journalists anyway.
This was as much a logistical matter as it was a begrudging commitment to the underpinnings of democracy: News organizations were the nation’s watchdogs, yes, but also stewards of the complex editorial and technological infrastructure necessary to reach the rest of the people. They had the printing presses, then the steel-latticed radio towers, and, eventually, the satellite TV trucks. The internet changed everything. Now, when Donald Trump wants to say something to the masses, he types a few lines onto his pocket-sized computer-phone and broadcasts it to an audience of 26 million people (and bots) with the tap of a button.
When President Obama left, I stayed on at the National Security Council in order to serve my country. I lasted eight days.
In 2011, I was hired, straight out of college, to work at the White House and eventually the National Security Council. My job there was to promote and protect the best of what my country stands for. I am a hijab-wearing Muslim woman––I was the only hijabi in the West Wing––and the Obama administration always made me feel welcome and included.
Like most of my fellow American Muslims, I spent much of 2016 watching with consternation as Donald Trump vilified our community. Despite this––or because of it––I thought I should try to stay on the NSC staff during the Trump Administration, in order to give the new president and his aides a more nuanced view of Islam, and of America's Muslim citizens.
Long after research contradicts common medical practices, patients continue to demand them and physicians continue to deliver. The result is an epidemic of unnecessary and unhelpful treatments.
First, listen to the story with the happy ending: At 61, the executive was in excellent health. His blood pressure was a bit high, but everything else looked good, and he exercised regularly. Then he had a scare. He went for a brisk post-lunch walk on a cool winter day, and his chest began to hurt. Back inside his office, he sat down, and the pain disappeared as quickly as it had come.
That night, he thought more about it: middle-aged man, high blood pressure, stressful job, chest discomfort. The next day, he went to a local emergency department. Doctors determined that the man had not suffered a heart attack and that the electrical activity of his heart was completely normal. All signs suggested that the executive had stable angina—chest pain that occurs when the heart muscle is getting less blood-borne oxygen than it needs, often because an artery is partially blocked.
John Krakauer, a neuroscientist at Johns Hopkins Hospital, has been invited to BRAIN Initiative meetings before, and describes it as “Maleficent being invited to Sleeping Beauty’s birthday.” That’s because he and four like-minded friends have become increasingly disenchanted by their colleagues’ obsession with their toys. And in a new paper that’s part philosophical treatise and part shot across the bow, they argue that this technological fetish is leading the field astray. “People think technology + big data + machine learning = science,” says Krakauer. “And it’s not.”
Two of the world’s three richest people extol the virtue, and relevance, of optimism in the age of Trump—and predict a comeback for fact-based discourse.
Bill Gates, the world’s richest man, and Warren Buffett, the third richest, are—not entirely coincidentally—two of the most unremittingly optimistic men on the planet. So when I met the two of them in New York recently to talk about the state of humankind, and about the future of American democracy, I had a clear understanding of my mission, which was to pressure-test their sanguinity at every turn.
I tried, and failed, though not completely. Both men appear to doubt some of President Trump’s innovations in rhetoric and policy. Both men have warm feelings about immigrants, and also about facts, and so are predisposed to react skeptically to recent developments in the capital. When I asked whether they believed America needed to be made great again, Buffett nearly jumped out of his chair: “We are great! We are great!” And when I asked about the Trump Administration’s problematic relationship with empiricism, Gates said, “I predict a comeback for the truth.” He went on to say, “To the degree that certain solutions are created not based on facts, I believe these won’t be as successful as those that are based on facts. Democracy is a self-correcting thing.”
Some data gathered from travelers going through customs can stay in a Homeland Security database for 75 years.
When you cross into or out of the United States, whether in a car or at an airport, you enter a special zone where federal agents have unusual powers to search your belongings—powers they don’t have elsewhere in the country. The high standard set by the Fourth Amendment, which protects people against unreasonable searches, is lowered, and the Fifth Amendment, which guards against self-incrimination and prevents the government from demanding computer passwords or smartphone PINs, is rendered less effective.
These special rules allowed a customs officer at the Houston airport to ask a NASA engineer to give up the passcode to his smartphone last month. The engineer, Sidd Bikkannavar, was reentering the U.S. after a two-week vacation in Chile, but the device he had on him belonged to his employer, NASA’s Jet Propulsion Laboratory. He routinely used the smartphone for sensitive work, so losing sight of it for a half hour was a “huge, huge violation of work policy,” Bikkannavar told me.
In an era when audiences are so sure about so much, the mistake—simple, dramatic, human—can be a wonderful thing.
Last year, the comedian Marc Maron brought the author Chuck Klosterman on as a guest on his WTF podcast. The two discussed many things (including Klosterman’s then-new book, But What If We’re Wrong?, which he was there to promote), but one of them was sports—and the particular thrill that they offer to audiences. Sporting events, Klosterman argued, promise that most dramatic of things: an unknown outcome. Unlike other widely watched events—the Super Bowl halftime show, the Grammys, the Oscars—the primary selling point of sporting events is that their endings are, by definition, unpredictable. Within them, anything can happen.
Well. While you can say a lot about the Oscars on Sunday, you can’t say that the glitzy awards show was boringly predictable. The 89th Annual Academy Awards ceremony, right at its conclusion, brought a mixture of confusion and shock and full, deep delight to its viewers as Warren Beatty and Faye Dunaway teamed up to announce the Best Picture winner and proceeded to, because of a backstage flub, announce the wrong movie. Chaos—and really, really good TV—ensued. Tired East Coasters were summoned back to their living rooms from their bedrooms, on the grounds that “ohmyGodyou’veGOTtoseethis.” Twitter erupted with jokes—about Bonnie and Clyde being at it again, about Schrödinger’s envelope, about “Dewey Defeats Truman” getting an Oscars-friendly update. It was late on a Sunday evening, and the unexpected had happened in the most unexpected of ways, and the whole thing was, as my colleague Adam Serwer perfectly summed it up, Moon-lit.
Did the prank with “Gary from Chicago” and his band of tourists humble Hollywood—or just condescend?
If the last-minute twist at the Oscars was seen to echo all the last-minute twists in American culture lately—the Super Bowl, the election—a silly five-minute segment earlier in the night should be noted for what it captured about the country’s ongoing tensions and tastes in iPhone peripherals.
Host Jimmy Kimmel’s team arranged for a sightseeing bus of supposedly “real” tourists to walk into the room, expecting a museum exhibit about the Oscars but instead finding themselves in the middle of the actual thing. “Welcome to the Dolby Theatre,” Kimmel announced. “This is the home of the Academy Awards, which are, in fact, happening right now.”