National Journal

Attention, America: The 2016 horse race is on. Nearly everyone who has floated the idea of running for president has already formally declared.

And as the number of official candidates continues to grow, more and more reporters will turn to pollsters and ask: Who's winning? With nearly 20 candidates potentially in the game, that question is nearly impossible to answer.

"Everybody wants clarity," says Peter Brown, associate director of the Quinnipiac University Poll. "Polls often don't provide clarity."

So we wanted to know: How can we do better, on our end, to inform the public? To that end, we surveyed several pollsters and academics on the following question: What are the most common mistakes reporters make when writing about political polls?

Here's what we learned.

Remember: Pollsters aren't prophets

Polls are snapshots, meant to capture voter sentiment at a particular moment. These snapshots can be aggregated to chart trends, but only in retrospect.

"I think most commonly, reporters ask if I think the current poll numbers will hold," J. Ann Selzer, who conducts polls for Bloomberg and other media outlets, writes in an email. "How in the world could anyone know? The campaigns are going to spend hundreds of millions of dollars to change the poll numbers, so I would expect change, but in what direction?"

This early in a presidential cycle, it can be difficult to get clear results at all — especially in this cycle's crowded Republican field.

"We're currently in a completely unknown situation," Brown says. With so many contenders, there might be a chunk of "leaders," but they're statistically insignificant ones.

Because of the high volume of candidates, "you can't get any clarity because you can't get any separation," Brown says.

Tom Rosenstiel, a media critic and researcher at the American Press Institute, suggests that at this point in the election cycle, when people might not know much about the individual candidates, polls that focus on how people feel about the state of the country (its direction, how the economy is doing, how voters feel about the political parties, whether they've voted before) are more meaningful than typical horse-race polls.

Mind your margins of error

The purpose of polling is to approximate the views of a large population (say, likely voters in Iowa) with a much smaller sample (say, 1,000 randomly selected Iowans). The margin of error is the pollster's acknowledgement that his or her sample may not perfectly represent the larger population.

The margin of error is a plus-or-minus figure. If a candidate is polling at 47 percent and the margin of error is 4 percentage points, it means that the real-world figure could be as low as 43 percent or as high as 51 percent.
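
For readers who want to see where that plus-or-minus figure comes from, here is a minimal sketch in Python of the standard 95 percent margin-of-error calculation for a simple random sample; the function name and the numbers below are our own illustration, echoing the hypothetical example above.

```python
import math

def margin_of_error(sample_size, proportion=0.5, z=1.96):
    """Approximate 95% margin of error for a simple random sample,
    using the normal approximation: z * sqrt(p * (1 - p) / n).
    proportion=0.5 gives the largest (most conservative) margin."""
    return z * math.sqrt(proportion * (1 - proportion) / sample_size)

# A sample of 1,000 respondents, as in the Iowa example above.
moe = margin_of_error(1000)            # about 0.031, or roughly 3 points
print(f"Margin of error: +/- {moe:.1%}")

# A candidate polling at 47 percent with a 4-point margin of error:
support, moe_pts = 0.47, 0.04
print(f"Plausible range: {support - moe_pts:.0%} to {support + moe_pts:.0%}")
# -> Plausible range: 43% to 51%
```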

If small changes in poll numbers fall within the margin of error, it's impossible to know if those changes are real or flukes. "Unless the change from one poll to another is outside the margin of error, it is not a change, and must not be interpreted as one," David Redlawsk, director of the Rutgers-Eagleton Poll, explains in an email. A change from 42 percent to 44 percent is not a change if the margin of error is 3 percentage points. It's only safe to say a particular candidate is in the lead if that lead is twice as large as the margin of error, Redlawsk says.
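
Redlawsk's two rules of thumb are simple enough to apply mechanically. The sketch below is our own illustration of them (not his code), using hypothetical numbers:

```python
def change_is_meaningful(old, new, moe):
    """A poll-to-poll shift that stays inside the margin of error
    should not be reported as a real change."""
    return abs(new - old) > moe

def lead_is_meaningful(leader, runner_up, moe):
    """Rule of thumb: only call it a lead if the gap is at least
    twice the margin of error."""
    return (leader - runner_up) >= 2 * moe

# A move from 42 percent to 44 percent with a 3-point margin of error:
print(change_is_meaningful(42, 44, 3))   # False: within the noise

# A 5-point lead with a 3-point margin of error:
print(lead_is_meaningful(47, 42, 3))     # False: the gap would need to be 6 points
```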

Scott Keeter, director of survey research at the Pew Research Center, attributes margin-of-error reporting mistakes to a "sense of false precision." (Keep in mind pollsters only claim 95 percent confidence that the real-world numbers fall within the margin of error.) And Keeter adds another caveat to interpreting tight leads: "Unless the sample size is enormous, [a difference of a couple points] is not statistically significant."
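
Keeter's point about sample size can be made concrete with the same formula as above. This back-of-the-envelope arithmetic is ours, not his:

```python
def sample_size_for_moe(target_moe, proportion=0.5, z=1.96):
    """Approximate simple-random-sample size needed to achieve a given
    95% margin of error (target_moe expressed as a fraction)."""
    return (z ** 2) * proportion * (1 - proportion) / target_moe ** 2

# To shrink the margin of error to 1 point, so that a 2-point lead
# clears the "twice the margin of error" rule of thumb:
print(round(sample_size_for_moe(0.01)))   # roughly 9,600 respondents
```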

Transparency, transparency, transparency

Be skeptical if a polling organization makes it difficult to find the following: the complete data set; the exact wording of the questions; the margin of error; and who, if anyone, paid for the poll, Rosenstiel said.

The more transparency, the more context a reporter gets — and the better understanding a reporter has, says Mollyann Brodie, president of the American Association for Public Opinion Research. Her organization has started a transparency initiative to encourage openness among polling groups.

The order in which questions appear on polls matters, too, Brodie says. Reporters can't single out the results of a particular question while ignoring the larger context in which that query was posed. If a pollster has asked multiple questions about the Obama administration before moving on to others, poll participants are already "primed" to be thinking about how they feel about the current state of the country and the administration. And that can affect how they answer any additional questions — including the one that a reporter might want to focus on. If participants feel negatively about Obama, they may respond negatively to the next question about the economy.

Avoid cross-contamination

In June, Gravis Insights released a poll of likely New Hampshire Republican primary voters. The poll found that Jeb Bush was in the lead: 21 percent of those surveyed said they'd pick him for the nomination. In May, a Bloomberg poll had found that Bush was the choice of 11 percent of New Hampshire GOP voters. That seems like a big jump.

Or is it?

Charles Franklin, director of the Marquette Law School Poll, says to be wary about comparing two numbers from different polling firms. To make a sound comparison between two polling numbers, you want to hold as many variables constant as possible. Citing two different polls in one comparison introduces a whole mess of variables.

"Some pollsters are using live interviewers on both landline and cellphone calls, others are using automated calls only to landlines, and some are using automated calls plus Internet surveys — and then still others are doing everything on the Internet," Franklin says.

Similarly, it's important to make sure the same question is being asked in each poll to ensure a clean comparison. If a poll is asking about job performance, compare it to other job-performance polls.

Look for consensus

A strong trend from one polling firm that uses consistent methods over time is good. But consensus among multiple polling firms is better.

"We should always be less certain about the results of a single poll than the average of a number [of polls]," writes Leonie Huddy, director of the Center for Survey Research at Stony Brook University, in an email.

Poll aggregation sites such as Real Clear Politics and HuffPost Pollster combine results from many polling firms over time, and can sketch out the big trends more definitively than a single poll can. (Nate Silver made his name at FiveThirtyEight with these "polls of polls.")
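
The aggregators differ in their exact methods (weighting by recency, sample size, and pollster track record, among other things), but the core idea is an average across many polls. Here is a bare-bones sketch with made-up numbers:

```python
# Hypothetical recent polls for one candidate: (firm, support %, sample size)
polls = [
    ("Firm A", 21, 800),
    ("Firm B", 17, 1000),
    ("Firm C", 19, 600),
]

# Simple unweighted average, as a baseline.
simple_avg = sum(support for _, support, _ in polls) / len(polls)

# Weighting by sample size is one common refinement; real aggregators
# also adjust for recency and pollster "house effects."
total_n = sum(n for _, _, n in polls)
weighted_avg = sum(support * n for _, support, n in polls) / total_n

print(f"Simple average:   {simple_avg:.1f}%")    # 19.0%
print(f"Weighted average: {weighted_avg:.1f}%")  # 18.8%
```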

Watch out for the cellphones

It goes without saying that not all polls are created equal, and that reporters, editors, and the public must be wary of their variations. Polls differ in sample size, in how their questions are phrased, in whether they are partisan or nonpartisan, and in whether they are conducted by live callers, by robocall, or online.

Another major red flag? Pollsters who exclude cellphone users from their sample. After all, not every voter has a landline. Restrict the poll to landline users only, and you restrict your sample.

"That's a big difference between pollsters these days," Brown said. "If you don't call cell phones, it's hard to argue that you're getting a random view of the electorate." And randomness is key to the quality of a poll.

What Americans don't know may be more important than what they do

A recent Monmouth University poll (June 11 to June 14) found that 46 percent of Republicans hadn't heard enough about Scott Walker to form a favorable or unfavorable opinion about him. According to that same poll, 10 percent of Republicans say they'd choose Walker to win the primary. What's more likely to be reported: the 10 percent figure or the 46 percent figure?

At this early stage, the "don't know" responses might be more meaningful, because they indicate how much room a candidate has to grow.

"For a well-known candidate, their upside potential may be very limited, because people have already figured out what they think about them," Keeter said, referring to their potential to gain in favorability. "Whereas for somebody who has very low name recognition, they may have a lot of upside potential if people were to learn more about them."

Take Barack Obama in 2007. He started showing up on pollsters' radar as early as February of that year, and his upside potential was "considerable." Hillary Clinton was in a different position: "Her upside potential … was not as great at that point because people kind of knew whether they liked her or not."

Don't reach

It's tempting to look at a dip in a candidate's numbers and want to find a corresponding event. Some political writers build a whole career on doing just that. But polls aren't about finding causes. They are about finding the effects.

"Usually what I say to reporters when they ask me, 'Why did [a change in the polls] happen?' I say: 'I'm pretty confident in what happened, but knowing why it happened is much harder,'" Franklin says. He says it's better for pundits to take some humility in their assessment, to say, "Here's what happened, but exactly why, we are not sure of."

"That's a very hard thing to do in a story or to say as a source, to confess that you really don't know," he says.
