When the session ended, I felt compelled to undertake my well-honed (if
not patented) "cocktail party" mode of interrogation. Especially with
some academics, I find it useful to force them to answer questions as
if posed by some unknowing soul at a cocktail party. Please, no
polysyllables or talk of "paradigms."
So Ms. Hillygus, director of the Harvard government department's
Program on Survey Research, what do you still not understand about the
2008 election? What answers do you crave, given that there's so much
data, especially from NES and Annenberg, still to be analyzed
before their formal release in coming months? Any
Obama-McCain-Clinton-Edwards-Romney post-mortem really seems to be more
of an ongoing autopsy, according to the professionals.
"Why did people vote the way they did? Why did they change from Hillary to Obama," she said.
"It's not a big surprise that Obama won," said Hillygus, citing the
lousy economy, the Iraq war and disdain for the Bush administration.
"But what was the election really about? Why did people vote in ways we
Hillygus assumed that when it came to whether a disappointed Clinton or
Edwards supporter ultimately backed Obama or Sen. John McCain, the
reasons were to be found in the economy or, perhaps, an individual's
racial attitudes. But she's now finding, post-election, that the reason
may be the Iraq war; that your view of the war may have been the
biggest predictor of whom you switched to. "That surprises me."
When it comes to tussles over methodology, she fears matters getting
worse. There's far more information, for sure, but is it getting any
better? Pollster.com, 538.com, RealClearPolitics.com and others found
audiences for their daily meshing of campaign polls. As she spoke, I
thought back to starting my campaign mornings at our kitchen laptop,
going to various websites precisely to get their averages of the latest
polls for Ohio, Pennsylvania, North Carolina, wherever.
But, Hillygus noted, those sites are combining polls with wildly
different methodologies and of very different quality. That overarching
reality is surely not understood by those sites' fans, or by occasional
television commentators like me, who gab about results thrown into a
website's dumbed-down Cuisinart.
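To see why that matters, here's a back-of-the-envelope sketch (the
polls and quality scores below are invented for illustration, not any
site's actual method): a simple average treats a small, sloppy survey
the same as a large, careful one, while weighting by size and quality
does not.

    # Hypothetical polls: (candidate share, sample size, quality score 0-1).
    # All numbers are invented for illustration.
    polls = [
        (0.52, 2000, 0.9),  # large, careful survey
        (0.47, 400, 0.3),   # small, low-quality survey
        (0.51, 1200, 0.8),
    ]

    # Naive average, the way a simple aggregator might blend them.
    naive = sum(share for share, _, _ in polls) / len(polls)

    # Average weighted by sample size and quality, discounting weak polls.
    weights = [size * quality for _, size, quality in polls]
    weighted = sum(share * w for (share, _, _), w in zip(polls, weights)) / sum(weights)

    print(f"naive average:    {naive:.3f}")     # 0.500
    print(f"weighted average: {weighted:.3f}")  # 0.515

With these made-up numbers the two methods land a point and a half
apart--enough, in a tight race, to change the story a commentator tells.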
"And those [combining of results] shape behavior, media reporting and
individual behavior," she said. "Polls claiming a close race will raise
"And the sad truth is that it's not just consumers, it's scholars,
too," who err, she said. "The sad truth is that some media outlets have
higher standards than some academic journals."
"The way we collect data has an impact on the results that we find--a
point often overlooked by scholars, journalists, and the public," she
elaborated in an e-mail. "Survey response rates are declining -- it's
harder to reach people and once you reach them they are less likely to
answer a question; New technologies have complicated survey
sampling--offering more ways to contact people but also creating new
ways for people to avoid being contacted (caller ID, etc). No survey
is perfect (including the U.S. census), but there is also considerable variation in survey quality and accuracy that can impact the knowledge claims."
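Her point about data collection is easy to demonstrate with a toy
simulation (every number below is invented; it models no real survey):
if the people willing to answer the phone differ systematically from
the people who aren't, even an enormous sample settles on the wrong
answer.

    import random

    random.seed(1)

    # Hypothetical electorate: true support for a candidate is 50%,
    # but supporters are assumed to be slightly likelier to take the call.
    TRUE_SUPPORT = 0.50
    P_RESPOND_SUPPORTER = 0.10  # 10% response rate among supporters
    P_RESPOND_OTHER = 0.07      # 7% among everyone else

    responses = []
    for _ in range(1_000_000):  # a very large contact list
        supporter = random.random() < TRUE_SUPPORT
        rate = P_RESPOND_SUPPORTER if supporter else P_RESPOND_OTHER
        if random.random() < rate:
            responses.append(supporter)

    estimate = sum(responses) / len(responses)
    print(f"respondents: {len(responses):,}")
    print(f"estimated support: {estimate:.3f} (truth: {TRUE_SUPPORT:.2f})")

Roughly 85,000 people respond, yet the estimate comes out near 59
percent--almost nine points high--because sample size cannot fix who
selects into answering. That, in miniature, is the variation in quality
and accuracy she's talking about.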