"They were so badly off," said Doug Rivers, who also runs the research firm YouGov/Polimetrix and who consulted for both CBS News and the Economist magazine during the 2008 campaign.
And, as weird as those New Hampshire numbers were, he reminded his tired but interested audience, "The first exit polling numbers from Virginia" on Election Day "showed Obama with a 32-point lead."
At that point, a few guffaws broke the languor of the decidedly non-palatial room. After all, such comments can be what passes for humor when the nation's political scientists, and hundreds of intense and job-seeking doctoral candidates, convene at the annual Midwest Political Science Association brainfest at the dowager Palmer House. This particular panel was about "Data on the 2008 Election."
Indeed, the hundreds of individual sessions, not to mention the many more academic papers unveiled, served as a reminder that in the cable-fueled world of instant political analysis, there are folks---and many very bright folks---who take months, maybe years, figuring out what many of us might have errantly assumed was already well-known.
So perhaps you weren't interested in panels on "Religious Institutions and Political Intermediaries," "Media and Politics in Southeast Asia," "Trust, Euphoria and Punishment," "Policy Start-Ups: Diffusion and Entrepreneurship," or "The Role of Gender in Post-Communist Institutions and Society."
Or you didn't want to go to bed with footnoted treatises on how the number of beer company workers living in a congressional district might impact a congressman's vote on hops-and-barley-related legislation; on statistical common denominators among 75 revolutions and rebellions; on patterns of impeachment proceedings in new democracies; on the politics of land reform in Thailand; on whether female incumbents are more likely to be challenged for re-election than males; or on how Al-Jazeera's English online coverage of the Middle East differs from CNN's or the BBC's.
If you craved a post-mortem on the election, the seven pollsters and academics gathered that morning provided insight, cautionary notes and a window onto a rapidly changing, possibly unduly influential, enterprise of discerning what Americans are thinking at any given moment.
An obvious new reality is competition. Once upon a time, in those ancient days of the 1930s and 1940s, there were two well-known commercial pollsters, Gallup and Roper. They relied on what the trade knew as "quota sampling," wherein interviewers filled fixed demographic quotas, picking a panel of half males, half females, say, and asking them questions. That all seemed to work until the late 1940s. By then a University of Michigan group headed by Angus Campbell was using an "area probability" sample, by which one chose a set of geographic units at random. Within each geographic unit, you picked a random block. Within each block, you targeted a random dwelling unit. And within a dwelling unit, you found a random individual. This approach was praised and further legitimized by its nailing the 1948 presidential election.
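That multi-stage draw (unit, then block, then dwelling, then person) can be sketched in a few lines of Python. Everything here, the frame, the place names, the people, is invented purely to illustrate the procedure:

```python
import random

def area_probability_draw(frame, rng=random.Random(1948)):
    """Pick one respondent via multi-stage area sampling:
    random geographic unit -> random block -> random dwelling -> random person."""
    unit = rng.choice(list(frame))                      # e.g. a county or metro area
    block = rng.choice(list(frame[unit]))               # a block within that unit
    dwelling = rng.choice(list(frame[unit][block]))     # a dwelling on that block
    person = rng.choice(frame[unit][block][dwelling])   # an individual in the home
    return unit, block, dwelling, person

# A toy sampling frame (entirely made up):
frame = {
    "Unit A": {"Block 1": {"12 Elm St": ["Ann", "Bob"]},
               "Block 2": {"4 Oak St": ["Cy"]}},
    "Unit B": {"Block 3": {"9 Pine St": ["Dee", "Ed", "Flo"]}},
}
print(area_probability_draw(frame))
```

The point of the design is that every stage is a random choice, so no interviewer's judgment about whom to approach, the weak spot of quota sampling, ever enters the draw.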
From that point, the subsequent big election study, funded by the Carnegie Corporation and other private foundations throughout the 1950s and 1960s, was indeed Michigan's American National Election Studies (though for many years it was called the National Election Studies, or NES, a moniker the old fogies still use). In the 1970s, the studies began to be funded by the National Science Foundation, with the University of Michigan's Warren Miller (later to move to Arizona State University; he died in 1999) as Principal Investigator. The Michigan operation remained the core supplier of data to most everybody in the field. The most recent Principal Investigators were Arthur Lupia of Michigan and Jon Krosnick of Stanford, as part of a Michigan-Stanford collaboration. A new set of PIs was recently announced: Michigan's Vince Hutchings and Stanford's Gary Segura and Simon Jackman.
The National Annenberg Election Survey, via the University of Pennsylvania, subsequently took on prominence, though Wall Street-inspired financial woes threaten the underlying endowment at the Annenberg School at Penn and, thus, the scope of future efforts. Started in 2000 by Kathleen Hall Jamieson, it is a much larger study than the NES, with about 100,000 interviews in 2004 and again in 2008, conducted by phone (and even with a separate Internet-based survey in 2008), and is focused on campaigns and media.
NES, Annenberg and Gallup remain the academic pacesetters. They're now finding dozens of competitors and new partnerships, with the 2008 campaign even including a polling combination of the Associated Press and upstart Yahoo! News. And our collective appetite for the latest numbers seems to grow along with the media outlets craving numbers and new "story lines" to fill time and space in the 24/7, Twittering news world.
Meanwhile, dramatic shifts in the basic means of gathering data---first from the traditional person-to-person interviews to phone interviews, now to the evolving use of online responses---have intensified arguments among the cognoscenti as to basic methodologies and their flaws. For example, are people really likely to be more honest when answering questions online than in talking to a real person, as many reflexively assume? Do pollsters have to pay local connectivity charges for respondents in need?
What's the right way to truly measure racial attitudes? wondered panel member Sunshine Hillygus, a researcher in Harvard's government department. Will people answer race-related questions differently online? She believes that the proliferation of competing data "increases our responsibility as reviewers" of the data, especially with data being collected in new ways, notably online.
Even within the inside-baseball discussion, it was clear that there are significant problems with what and whom we should trust. And it goes far beyond what one panelist noted are "the real sample differences among Annenberg and NES" when it comes to presidential campaign polling.
"Right now the field doesn't have a good idea about the properties of the data," said Arthur Lupia, the University of Michigan political scientist and ex-co-head honcho of NES. "We need clearer understanding of the properties and methods which produce the data; and what are the types of [political] analysis for which data is most suited."
The experts concede the need to understand Americans and their behavior patterns far better. They need to be more sophisticated about how voters consume media and get their information, said Richard Johnston of the University of Pennsylvania and the Annenberg Survey. They need to know, said Lupia, more about differences between Cuban-Americans in South Florida and new Mexican-American immigrants in California before just lumping together their responses.
When the session ended, I felt compelled to undertake my well-honed (if not patented) "cocktail party" mode of interrogation. Especially with some academics, I find it useful to force them to answer questions as if posed by some unknowing soul at a cocktail party. Please, no polysyllables or talk of "paradigms."
So Ms. Hillygus, director of the Harvard government department's Program on Survey Research, what don't you still understand about the 2008 election? What answers do you crave, given that there's so much data, especially from NES and Annenberg, still to be analyzed before its formal release in coming months? Any Obama-McCain-Clinton-Edwards-Romney post-mortem really seems to be more of an ongoing autopsy, according to the professionals.
"Why did people vote the way they did? Why did they change from Hillary to Obama," she said.
"It's not a big surprise that Obama won," said Hillygus, citing the lousy economy, the Iraq war and disdain for the Bush administration. "But what was the election really about? Why did people vote in ways we predicted?"
Hillygus assumed that when it came to whether a disappointed Clinton or Edwards supporter ultimately backed Obama or Sen. John McCain, the reasons were to be found in the economy or, perhaps, an individual's racial attitudes. But she's now finding, post-election, that the reason may be the Iraq war; that your view of the war may have been the biggest predictor of whom you switched to. "That surprises me."
When it comes to tussles over methodology, she fears matters getting worse. There's far more information, for sure, but is it getting any better? Pollster.com, 538.com, RealClearPolitics.com and others found audiences for their daily meshing of campaign polls. As she spoke, I thought back to starting my campaign mornings at our kitchen laptop, going to various websites precisely to get their averages of the latest polls for Ohio, Pennsylvania, North Carolina, wherever.
But, Hillygus noted, those sites are combining polls with wildly different methodologies and of very different quality. That overarching reality is surely not understood by those sites' fans, or occasional television commentators, like myself, who gab about results thrown into a website's dumbed-down Cuisinart.
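Hillygus's complaint is easy to make concrete. A minimal sketch of the kind of averaging the aggregator sites popularized might look like the following; the polls, numbers and "house" labels are invented for illustration, not drawn from any real 2008 survey:

```python
# Hypothetical state polls with very different methodologies.
polls = [
    {"house": "Phone RDD",     "obama": 50.0, "mccain": 44.0, "n": 1200},
    {"house": "Online opt-in", "obama": 53.0, "mccain": 42.0, "n": 800},
    {"house": "IVR robocall",  "obama": 48.0, "mccain": 46.0, "n": 600},
]

def simple_average(polls, key):
    # Every poll counts equally, whatever its method or sample size.
    return sum(p[key] for p in polls) / len(polls)

def size_weighted_average(polls, key):
    # Weighting by sample size is one crude refinement; it still
    # ignores methodological quality entirely.
    total_n = sum(p["n"] for p in polls)
    return sum(p[key] * p["n"] for p in polls) / total_n

print(round(simple_average(polls, "obama"), 1))         # 50.3
print(round(size_weighted_average(polls, "obama"), 1))  # 50.5
```

Neither number tells a reader that one of the inputs was an opt-in online panel and another a robocall; the blending step simply erases the methodological differences Hillygus worries about.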
"And those [combining of results] shape behavior, media reporting and individual behavior," she said. "Polls claiming a close race will raise turnout."
"And the sad truth is that it's not just consumers, it's scholars, too," who err, she said. "The sad truth is that some media outlets have higher standards than some academic journals."
"The way we collect data has an impact on the results that we find--a point often overlooked by scholars, journalists, and the public," she elaborated in an e-mail. "Survey response rates are declining -- it's harder to reach people and once you reach them they are less likely to answer a question; New technologies have complicated survey sampling--offering more ways to contact people but also creating new ways for people to avoid being contacted (caller ID, etc). No survey is perfect (including the U.S. census), but there is also considerable variation in survey quality and accuracy that can impact the knowledge claims."
"ABC News has higher standards than some academic journals," she responded.
Mr. Rivers of Stanford, what don't you still get?
Well, he really does want to know what the heck happened with the Virginia exit polling on Election Day. That analysis, he said, is still going on. "We do not understand why it happened." Ditto, he said, with the New Hampshire primary polling which paved the way for what was then decreed as Clinton's tear-fueled upset victory over Obama, the red-hot victor in Iowa.
With that panel over, I ambled over to "Race, Ethnicity, and 2008 Presidential Election." For sure, it was a tough choice, given the 62 competing panels at that same hour, including "Strategic Interactions Between Cabinets and Legislatures in Parliamentary Democracies," "Latin American Social Movements in the Shadow of Neoliberalism," and "Locke and the Ancients." Future teachers, or frustrated current ones, were perhaps drawn to "Reinvigorating the Ubiquitous U.S. Government Course," with Staci Lynn Beavers of California State University San Marcos offering a paper on "Getting Political Science in on the Joke: Using TV's 'The Daily Show' to Teach Introductory U.S. Politics."
Did the election really suggest we've entered a "post-racial" America, debated professors from Cornell, Notre Dame, Rutgers, the University of North Texas, Duke and Stanford? Did Obama have an Asian-American problem, and why did Vietnamese-, not Korean-Americans, go for McCain? Why were young southern whites as likely to vote for the Republican as older southern whites, thus not exhibiting the ideological generation gap found among most other groups? What was the Michelle Obama Factor and, as one panelist suggested, would Barack have definitely lost hefty black support, and the election, if married to a white woman?
And what might be the impact on childhood socialization patterns of pre-K and kindergarten children growing up with a black president?
The odds are strong that the answers to all of those questions won't be ready for cable TV dissection any time soon.
But the political scientists will be back in Chicago April 22-25, 2010.