Political Polling's Unfavorables Are on the Rise

Even as election junkies become more focused on opinion data, the quality of that data seems to be getting worse. And now, Gallup decides to skip the presidential primary altogether.


Gallup is getting out of the horserace.

As Politico reported Wednesday, the elder statesman of the American polling business has decided not to poll on the presidential primary, and might not even poll on the general election. It’s the end of an era. “They’ve been doing this since the 1940s,” says Emory political scientist Alan Abramowitz. “That was one of their talking points, their record of accuracy in presidential elections. Now they’re abandoning the field.”

Gallup’s accuracy record took a beating in the 2012 election, which might help explain its reticence this time around. The organization conducted an extensive review to try to figure out what went wrong. But Gallup editor Frank Newport says that shifting priorities and the cost of election polling were the bigger factors. “We believe to put our time and money and brainpower into understanding the issues and priorities is where we can most have an impact,” he told Politico.

That points to a perverse dynamic in the polling world. There are many more polls than there were just a couple of decades ago, because new technological methods have lowered the barriers to entry for aspiring pollsters. But many of those polls aren’t very good—they’re unreliable or inaccurate, and even when they get it right, the methodologies they use to get there are questionable. Some rely on internet surveys, skip cellphone users, draw on small samples, or simply sample poorly. And even as the barriers to entry fall, the cost of conducting a good poll continues to rise. Fewer people have landlines, especially younger people and non-whites. That means you have to call them on cellphones, but federal law bars robocalls to cellphones, meaning a poll needs actual interviews by a live person.* Response rates have gotten lower, so each poll requires a huge number of calls.

The rise of data journalism and poll aggregators—from FiveThirtyEight to RealClearPolitics to HuffPost Pollster—has made polls a greater focus for political junkies and coverage than ever before. Political scientists also complain that the media breathlessly report junk polls without adequate scrutiny, and fail to distinguish between better and worse ones. If people can’t tell the difference between Gallup and Gravis, why spend the time and money to do it right? Smaller pollsters are also sometimes accused of massaging their results or improperly weighting them, either to better match what their competitors are finding, or in service of an ideological goal.*

The result is that as polls become a more and more important element of political journalism, established and (generally) reliable pollsters like Gallup (or Pew, which hasn’t polled on the primary) are exiting the field and leaving an ever-less-reliable picture.

“The polling profession is in a huge transition, and it’s going to be a little rocky until we figure it out,” says Terry Madonna, a professor of political science at Franklin and Marshall and director of the college’s poll. For example, online polls are becoming more and more accurate measures of opinion, but many experts don’t think they’re refined enough to trust broadly yet.

The profession has seen some notable flops in recent years, of varying types and seriousness. Republican pollsters have suffered a large number of polling failures, from Mitt Romney to Eric Cantor, the House majority leader unexpectedly beaten in a GOP primary. Polls in the 2014 midterm election proved to have been far too rosy for Democrats. Surveys of the U.K. general election this year also badly missed the Conservative victory.

That’s the paradox: Even as poll analysts and data journalists are exalted as the wave of the future, the data they depend on may be eroding the ground under them. “Electoral modelers have a nerdy little secret: We aren’t oracles,” Sean Trende wrote earlier this year. “Draw back the curtain, and you’ll see that we are only as good as the polls we rely on and the models we invent.”

But who cares about whether the primary polls are bad—as long as you’re not a polling analyst with a reputation to protect? After all, as every desperate campaign likes to point out, the only poll that matters is on election day. And as The Atlantic quoted George Gallup in 1972, “The record of polls in primary elections is so bad that the sophisticated poll watcher will pay little attention to them, or make allowance for large error if he does.”

In the short term, it may not be a big deal. Early primary polls are notoriously pointless. Abramowitz notes that coverage of long-shot presidential candidates—from Newt Gingrich in 2011 to Ben Carson today—helps inflate their poll numbers, and those numbers in turn create a feedback loop that extends their candidacies, since voter choices are often predicated on “electability,” a notion derived in part from polls.

The bigger risk might be if the best pollsters avoid getting involved in the general election. A pollster like Gallup, with the capacity to conduct many interviews over an extended period, can produce refined data of the sort that smaller pollsters can only imagine—offering granular looks at subgroups or regions. Even when faster, drive-by surveys give an accurate view of the state of the horserace, they often don’t do much to explain why it is that voters feel the way they do. That’s a service that Pew, Gallup, and a handful of other well-regarded polls have always provided, and the one that may be most at risk as the polling world gets shaken up.

“They have the resources where they could really dig down and look,” Abramowitz says. “It’s disappointing.”

* This post has been updated to clarify that the rules require a live interviewer, not that interviews take place in person, and that the accusation against some automated polls is that they improperly weight their results, not simply that they weight them.