CHICAGO---As President Obama was meeting French President Nicolas Sarkozy at the richly elegant Palais Rohan in Strasbourg, a Stanford University political scientist confided to colleagues in a dimly lit hotel ballroom that he still doesn't understand why polls failed to foresee Obama's defeat in the New Hampshire primary 15 months earlier.
"They were so badly off," said Doug Rivers, who also runs the research firm YouGov/Polimetrix and who consulted for both CBS News and the Economist magazine during the 2008 campaign.
And, as weird as those New Hampshire numbers were, he reminded his tired but interested audience, "The first exit polling numbers from Virginia" on election day "showed Obama with a 32-point lead."
At that point, a few guffaws broke the languor of the decidedly non-palatial room. After all, such comments can be what passes for humor when the nation's political scientists, and hundreds of intense and job-seeking doctoral candidates, convene at the annual Midwest Political Science Association brainfest at the dowager Palmer House. This particular panel was about "Data on the 2008 Election."
Indeed, the hundreds of individual sessions, not to mention the many more academic papers unveiled, served as a reminder that in the cable-fueled world of instant political analysis, there are folks---and many very bright folks---who take months, maybe years, figuring out what many of us might have mistakenly assumed was already well known.
So perhaps you weren't interested in panels on "Religious Institutions and Political Intermediaries," "Media and Politics in Southeast Asia," "Trust, Euphoria and Punishment," "Policy Start-Ups: Diffusion and Entrepreneurship," or "The Role of Gender in Post-Communist Institutions and Society."
Or you didn't want to go to bed with footnoted treatises on how the number of beer company workers living in a congressional district might affect a congressman's vote on hops-and-barley-related legislation; on statistical common denominators among 75 revolutions and rebellions; on patterns of impeachment proceedings in new democracies; on the politics of land reform in Thailand; on whether female incumbents are more likely to be challenged for re-election than males; or on how Al-Jazeera's English online coverage of the Middle East differs from CNN's or the BBC's.
If you craved a post-mortem on the election, the seven pollsters and academics gathered that morning provided insight, cautionary notes and a window onto a rapidly changing, and possibly unduly influential, enterprise of discerning what Americans are thinking at any given moment.
An obvious new reality is competition. Once upon a time, in those ancient days of the 1930s and 1940s, there were two well-known commercial pollsters, Gallup and Roper. They relied on what the trade knew as "quota sampling," wherein interviewers filled fixed demographic quotas (half males, half females, for instance) and asked whoever fit the quota their questions. That all seemed to work until the late 1940s. By then a University of Michigan group headed by Angus Campbell was using an "area probability" sample: you chose a set of geographic units at random; within each geographic unit, you picked a random block; within each block, you targeted a random dwelling unit; and within the dwelling unit, you found a random individual. This approach was praised and further legitimized when it nailed the 1948 presidential election.
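For readers who think in code, here is a minimal sketch of that multi-stage draw. The nested geography and household rosters below are purely hypothetical stand-ins for real census units, and actual survey designs typically also weight selections by population size; this just illustrates the random-at-every-stage idea described above.

```python
import random

# Hypothetical toy geography: units contain blocks, blocks contain dwellings.
geography = {
    "Unit A": {"Block 1": ["dwelling 1", "dwelling 2"],
               "Block 2": ["dwelling 3"]},
    "Unit B": {"Block 3": ["dwelling 4", "dwelling 5"]},
}

# Hypothetical roster of who lives in each dwelling.
households = {f"dwelling {i}": [f"resident {i}a", f"resident {i}b"]
              for i in range(1, 6)}

def draw_respondent(geo, rosters):
    # Stage 1: choose a geographic unit at random.
    unit = random.choice(list(geo))
    # Stage 2: within that unit, pick a random block.
    block = random.choice(list(geo[unit]))
    # Stage 3: within that block, target a random dwelling unit.
    dwelling = random.choice(geo[unit][block])
    # Stage 4: within the dwelling, find a random individual.
    return random.choice(rosters[dwelling])

print(draw_respondent(geography, households))
```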
The big election study that followed, funded by the Carnegie Corporation and other private foundations throughout the 1950s and 1960s, was indeed Michigan's American National Election Studies (though for many years it was called the National Election Studies, or NES, a moniker the old fogies still use). In the 1970s, the studies began to be funded by the National Science Foundation, with the University of Michigan's Warren Miller (who later moved to Arizona State University and died in 1999) as Principal Investigator. The Michigan operation remained the core supplier of data to most everybody in the field. The most recent Principal Investigators were Arthur Lupia of Michigan and Jon Krosnick of Stanford, as part of a Michigan-Stanford collaboration. A new set of PIs was recently announced: Michigan's Vince Hutchings and Stanford's Gary Segura and Simon Jackman.
The National Annenberg Election Survey, run out of the University of Pennsylvania, subsequently took on prominence, though Wall Street-inspired financial woes threaten the underlying endowment at Penn's Annenberg School and, thus, the scope of future efforts. Started in 2000 by Kathleen Hall Jamieson, it is a much larger study than the NES, with about 100,000 interviews in each of 2004 and 2008, conducted by phone (plus a separate Internet-based survey in 2008) and focused on campaigns and media.
NES, Annenberg and Gallup remain the academic pacesetters. They're now finding dozens of competitors and new partnerships, with the 2008 campaign even including a polling combination of the Associated Press and upstart Yahoo! News. And our collective appetite for the latest numbers seems to grow along with the media outlets craving fresh figures and new "story lines" to fill time and space in the 24/7, Twittering news world.
Meanwhile, dramatic shifts in the basic means of gathering data---first from traditional person-to-person interviews to phone interviews, now to the evolving use of online responses---have intensified arguments among the cognoscenti over basic methodologies and their flaws. For example, are people really more likely to be honest when answering questions online than when talking to a real person, as many reflexively assume? Do pollsters have to pay local connectivity charges for respondents in need?
What's the right way to truly measure racial attitudes, wondered panel member Sunshine Hillygus, a researcher in Harvard's government department. Will people answer race-related questions differently online? She believes that the proliferation of competing data "increases our responsibility as reviewers" of that data, especially when it is collected in new ways, notably online.