It goes without saying that not all polls are created equal, and that reporters, editors, and the public must be wary of their variations. Polls differ in sample size, question phrasing, nonpartisan versus partisan sponsorship, and mode of contact: live caller, robopoll, or online.
Another major red flag? Pollsters excluding cell phone users in their sample. After all, not every voter has a landline phone. Restrict the poll to landline users only, and you restrict your sample.
"That's a big difference between pollsters these days," Brown said. "If you don't call cell phones, it's hard to argue that you're getting a random view of the electorate." And randomness is key to the quality of a poll.
What Americans don't know may be more important than what they do
A recent Monmouth University poll (June 11 to June 14) found that 46 percent of Republicans hadn't heard enough about Scott Walker to form a favorable or unfavorable opinion about him. According to that same poll, 10 percent of Republicans said they'd choose Walker to win the primary. What's more likely to be reported: the 10 percent figure or the 46 percent figure?
At this early stage, the "don't know" responses might be more meaningful, because they indicate how much room a candidate has to grow.
"For a well-known candidate, their upside potential may be very limited, because people have already figured out what they think about them," Keeter said, referring to their potential to gain in favorability. "Whereas for somebody who has very low name recognition, they may have a lot of upside potential if people were to learn more about them."
Take Barack Obama in 2007. He started showing up on pollsters' radar as early as February of that year, and his upside potential was "considerable." Hillary Clinton was in a different position: "Her upside potential … was not as great at that point because people kind of knew whether they liked her or not."
It's tempting to look at a dip in a candidate's numbers and want to find a corresponding event. Some political writers build a whole career on doing just that. But polls aren't about finding causes. They are about finding the effects.
"Usually what I say to reporters when they ask me, 'Why did [a change in the polls] happen?' I say: 'I'm pretty confident in what happened, but knowing why it happened is much harder,'" Franklin says. He says it's better for pundits to bring some humility to their assessments and to say, "Here's what happened, but exactly why, we are not sure of."
"That's a very hard thing to do in a story or to say as a source, to confess that you really don't know," he says.