
At 1 p.m. E.T. every day until Election Day three weeks from today, Gallup will release the results of its presidential tracking poll. For the next 21 days, the number from that poll will flash all over Twitter for about five minutes, work its way into various blog posts in the afternoon, and be cited to prove various points on cable television through the night and into the morning. And then, at 1 p.m. the next day, repeat. The Gallup tracking poll is treated, appropriately or not, like a heartbeat, like blood pressure, as an indicator of the health of the candidates.

Should it be? What does it mean? What can you learn from it? Consider this nurse training in reading campaign vital signs. Or, more to the point: consider this Polling 101.

What you will need for this exercise: an understanding of basic mathematics and access to the web, including the sites PollTracker and FiveThirtyEight. You do not need a pencil.

Let's begin.

What to look for in polling

As politicians increasingly treat voters like consumers, voters increasingly treat politics like sports. Since Americans watch a lot of sports, we're used to tracking the score. We love polling data because it feels like we're watching the Jumbotron -- one glance, we see who's winning. Cowboys 24, Giants 14. Romney 49, Obama 47.

But polls aren't scores. They're more like vital signs, like temperature. Polls measure a changing state, not a running tally.

Let's say I hand you a photograph of a football in mid-air, hovering over the 50-yard line, and ask where the football is going to land. From the photo alone, you can't tell -- you don't know which way it's moving or how fast. Poll numbers by themselves are that football: static, not revealing much. What is much more important is the trend -- how the football is moving, and how fast. If a Gallup poll shows the president winning 44 percent of the vote to Romney's 47, those numbers mean something very different if Obama's number yesterday was 62 percent than if it was 38 percent.

Most tools that either aggregate data from a number of polls or run recurring polls over time trace the trend with a line. PollTracker, for example, displays a running average of polling as a red line and a blue line. Like so:

Not to get too mathematical, but what's interesting isn't the line. What's interesting is the derivative of the line, the way and rate at which the line changes. The steeper a line goes up or down, the more dramatically a candidate's poll numbers are changing for the worse or for the better. To go back to the football analogy, there's an angle at which it's falling (or rising) and a speed. More important, if sometimes harder to eyeball, is how the curve progresses over an extended time. This is the key information: not just how the football is shimmying in the air right now, but where it is compared to the place from which it was thrown. Campaigns—like the apocryphal sharks that need to keep moving lest they die—need to keep moving, or they'll die.
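If you want to see that arithmetic spelled out, here is a rough sketch in Python -- with entirely invented daily numbers, and nothing to do with how PollTracker actually computes its lines -- of estimating a trend's slope with a simple least-squares fit.

```python
# A toy illustration (made-up numbers): estimate the slope of a candidate's
# daily polling average with an ordinary least-squares fit. The sign and size
# of the slope -- not any single day's number -- tells you whether the
# "football" is rising or falling, and how fast.

def trend_slope(values):
    """Return the least-squares slope of a series of daily poll numbers."""
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
    var = sum((x - mean_x) ** 2 for x in xs)
    return cov / var

# A hypothetical week of topline numbers for one candidate:
daily_average = [46.0, 46.4, 46.9, 47.1, 47.6, 48.0, 48.3]
print(f"Trend: {trend_slope(daily_average):+.2f} points per day")
# => roughly +0.39 points per day: the line is climbing, and this says how fast.
```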

Which sounds obvious. But remember that campaigns build increases in support into their campaign plans. For a candidate in a two-person race, the campaign needs to cobble together 50 percent of the votes cast, plus one extra vote to put it over the top. Campaigns poll and plot to figure out how to add up to that number: 50,000 from African-Americans over the age of 65; 22,000 from college-educated whites in Nebraska. As the campaign unveils ads, knocks on doors, and puts its plan into action, it expects to see support grow. What a campaign wants is an upward trend. If a positive trend isn't showing up in the polls, that can spell bad news for even a leading campaign.

What polls to watch

Don't watch Gallup unless you're Ed Schultz. (Just to be explicit: that is meant as an insult to Ed Schultz.) Gallup's daily tracking poll is itself a running average of the previous week of polling. In other words, the company builds in sentiment over time. Gallup can be useful, but primarily as an indicator that something has gone dramatically wrong with a campaign. If a candidate does exceptionally poorly in tonight's debate, Gallup may slowly show that candidate's numbers start to slide this week, as more and more of its sample is made up of interviews conducted after the debate. The temperature reading you get from Gallup is imprecise, particularly as Election Day gets closer and voters get more energized and engaged.
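For the curious, here's a toy sketch of why any poll built on a multi-day rolling window moves slowly. The numbers are made up and the window is a plain seven-night average, which is only a rough stand-in for Gallup's actual methodology.

```python
# A toy sketch of why a tracking poll built on a rolling window reacts slowly.
# Made-up nightly numbers: the candidate polls 48 every night, then drops to
# 44 after a bad debate. The seven-day average only drifts down as post-debate
# nights gradually replace pre-debate nights in the window.

nightly = [48] * 7 + [44] * 7   # seven nights before the debate, seven after

for day in range(7, len(nightly) + 1):
    window = nightly[day - 7:day]
    print(f"Day {day}: 7-day average = {sum(window) / 7:.1f}")
# Day 7: 48.0, Day 8: 47.4, ... Day 14: 44.0 -- the full drop takes a week to show.
```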

In a presidential race, there's some question of which polls are more important: polls of national attitudes toward candidates or individual state polls that might better predict the outcome of the electoral college. Nate Silver of FiveThirtyEight has been weighing which is a better predictor, most recently here. But as long as a poll is halfway legit, it's worth looking at.

How to know if a poll is legit

(The sound you hear is a can of worms being cracked open.)

The Washington Post has a really good overview of how polls work and how to determine their trustworthiness. It comes down to three factors: the sample, the questions, and the methodology.

Poll samples have been a popular topic recently. That Post article describes the goal of sampling well.

Stripped of statistics-speak, sampling the population is like testing the temperature of a bowl of soup—you don't need to eat the whole thing, just stir it up and taste a spoonful or two. Or like taking a blood sample—no need to drain the patient dry, a syringe-ful will do.

Polling companies use statistical analysis to ensure that the people they speak with comprise a representative sample of the populace. They seek out a mix of respondents by gender, party affiliation, age, and ethnicity that looks like the expected turnout on Election Day. That last point is a big one: predicting turnout is never trivial. This is where the distinction between "likely voters" and "registered voters" arises in a polling sample. As Election Day nears, people who tell polling companies that they're only somewhat likely to vote are more likely to be excluded from the poll—after all, they themselves have said that they might not show up at the polls.

The total number of people contacted varies, but polls with a sample size of a thousand people are generally more accurate—meaning they have a lower margin of error—than ones of a few hundred people. (There are about 16 caveats that should be applied to that sentence, but we'll skip them for now.) If pollsters can't reach the desired number of people in a demographic, they might weight the results to account for it. For example, if they aim for African-Americans to make up 12 percent of the sample but only reach 6 percent, they may choose to give greater statistical weight to those respondents.
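To put rough numbers on both of those ideas, here's a back-of-the-envelope sketch: the standard margin-of-error formula at 95 percent confidence, plus the weighting arithmetic from the example above. The formula assumes a simple random sample, which real polls are not, so treat the figures as approximations.

```python
import math

# Rough margin of error at 95 percent confidence for a simple random sample,
# using the worst case p = 0.5. Real pollsters fold in design effects and
# weighting, so this is strictly back-of-the-envelope.
def margin_of_error(n, p=0.5):
    return 1.96 * math.sqrt(p * (1 - p) / n) * 100  # in percentage points

print(f"n = 1000: +/- {margin_of_error(1000):.1f} points")  # about +/- 3.1
print(f"n = 300:  +/- {margin_of_error(300):.1f} points")   # about +/- 5.7

# The weighting example from the text: the pollster wanted African-Americans
# to be 12 percent of the sample but only reached 6 percent, so each such
# respondent counts roughly twice as much toward the topline.
target_share, achieved_share = 0.12, 0.06
print(f"Weight applied to that group: {target_share / achieved_share:.1f}x")  # 2.0x
```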

The recent fad of "unskewing" polls relates to the sample's composition. If a pollster decides that a sample composed of 40 percent Democrats, 37 percent Republicans, and 23 percent independents reflects the likely voting population, changing those percentages will likely yield a different result. Partisans (well, Republicans, for the most part) have reweighted final data to represent the voting population they expect (or hope) to see. There are a lot of reasons this is dumb, not the least of which is that it yields a clunky approximation from a refined set of data. But perhaps the most obvious reason that it's dumb is that polling companies live or die based on accuracy. They have a very good incentive to predict the results of a race accurately: it's what clients generally look for when hiring a firm. There are a few caveats that could be applied here, as well—one being that not all polls are meant to gauge election results, but rather opinion. Such polls are more likely to influence results through question choice.
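Here's a toy illustration, with invented figures, of why fiddling with the party mix moves the topline: the same hypothetical support levels within each party produce different "results" depending on the electorate you assume.

```python
# A toy illustration of why "unskewing" changes the topline: identical
# hypothetical support within each party, reweighted to two different
# assumed electorates. All numbers here are invented.

support = {"Dem": 0.92, "Rep": 0.06, "Ind": 0.45}  # share backing Candidate A

pollster_mix = {"Dem": 0.40, "Rep": 0.37, "Ind": 0.23}   # pollster's sample
unskewed_mix = {"Dem": 0.33, "Rep": 0.37, "Ind": 0.30}   # a partisan's guess

def topline(mix):
    return sum(mix[party] * support[party] for party in mix) * 100

print(f"Pollster's weighting: {topline(pollster_mix):.1f} percent")   # about 49.4
print(f"'Unskewed' weighting: {topline(unskewed_mix):.1f} percent")   # about 46.1
# Same interviews, different assumed electorate, different "result."
```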

Which is why questions should be considered when assessing a pollster. Let's say you were asked one of the two following sets of questions.

Set 1:

  • Do you feel that the country has recovered from George Bush's recession?
  • Do you support Barack Obama, Mitt Romney, or another candidate?

Set 2:

  • Do you support Barack Obama, Mitt Romney, or another candidate?
  • Would you consider yourself better off now than you were four years ago?

Both sets include questions that are a bit leading. In the first case, the Bush question might lead a respondent to be more sympathetic to the plight of the president. In the second, the "better off" question echoes a theme from the Romney campaign. Both questions are a little iffy to ask. But the order also matters. In the first case, the Bush question could influence the follow-up "horse race" question; respondents will have fresh in their minds the idea that Bush deserves blame for the economy. This is referred to as "priming": skewing the results you get by establishing a frame of mind among respondents. It is probably unnecessary to say that this is something that reputable pollsters try to avoid.

The final thing to consider when evaluating the trustworthiness of polls is methodology. Polls that use live calls, in which a real person asks the respondent questions, are generally more accurate and more expensive than automated ones, in which a robotic voice prompts respondents to push a button on the dial pad. (For one thing, it's easier for a live person to verify that he is speaking with the targeted voter.) FiveThirtyEight has also looked at how the type of phone used by the respondent has affected poll results. Surveys that include cell phone users have skewed toward Obama.

Even if a pollster uses all of the proper systems for its polling, that doesn't necessarily mean its polls are more accurate. It just means that they're more likely to be accurate. There's still a lot of statistical magic that happens in the black box that takes voter phone calls and turns them into spreadsheets. If that statistical analysis is flawed or based on erroneous assumptions, the results will be inaccurate. If you're interested, here's Nate Silver's extensive analysis of the accuracy of polling firms in 2010.

How to dive deep into the data

One of the best things about Public Policy Polling polls is that they include basic cross-tabulations of the data. Here, for example, is a PDF of their Pennsylvania results from yesterday. If you scroll down, you can see that they do more than just provide the question asked and the response by party and gender. The company also shows a tabular breakdown of question by demographic. Curious how people under 35 answered question four? Look it up. The deeper into the data you can dive, the more interesting things you can learn—as long as the sample size is still statistically meaningful.

If you ever have occasion to commission a poll, pollsters provide complete cross-tabs—breakdowns of answers by demographic, breakdowns of demographics by demographic (how many people aged 25-35 were men?), and even breakdowns of answers by demographic by demographic (how did Democrats who graduated from college answer question six?). This is the data from which the most can be learned -- and from which the most bad assumptions can be drawn.
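If you want to see what building a cross-tab actually involves, here's a minimal sketch with a handful of made-up respondents. Real cross-tabs are weighted and far larger; this just shows the mechanics.

```python
from collections import Counter

# A minimal sketch of what a cross-tab is: made-up respondent-level records,
# broken down by one demographic against one question. Real pollsters'
# cross-tabs are weighted; this ignores weights entirely.

respondents = [
    {"age": "18-34", "party": "Dem", "q4": "Approve"},
    {"age": "18-34", "party": "Rep", "q4": "Disapprove"},
    {"age": "35-64", "party": "Ind", "q4": "Approve"},
    {"age": "65+",   "party": "Rep", "q4": "Disapprove"},
    {"age": "18-34", "party": "Ind", "q4": "Approve"},
]

crosstab = Counter((r["age"], r["q4"]) for r in respondents)
for (age, answer), count in sorted(crosstab.items()):
    print(f"{age:>6} / {answer}: {count}")
# The caveat from above still applies: the deeper you slice, the smaller each
# cell gets, and a cell with a handful of respondents is mostly noise.
```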

BONUS: What to watch on election night

On election night, our scorekeeping tendencies go into overdrive. We watch every new release of returns in every state like salivating dogs. If Obama has 65 percent of the vote to Romney's 30 percent, that means something very different if 2 percent of precincts are reporting than if 98 percent are. Until about half of precincts are reporting, you're probably not going to learn much.

Here, again, the thing to watch is the trend. If Romney is up by ten points in a state, and each time a new chunk of precincts report it seems Obama gains one point, Obama could catch up -- depending on how much more of the state there is still to report. While certain regions of states (and cities) tend to favor one candidate over the other, returns are generally distributed randomly. Seeing how the trend develops, then, is your best bet for predicting where the results will end up, unless you happen to be familiar with the vagaries of that state's urban and suburban voting patterns. In which case, you know -- rely on that.
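For the trend-watchers, here's a deliberately naive sketch of that extrapolation, with invented returns. It leans on the same assumption the paragraph above flags: that the precincts still to come look roughly like the ones already counted.

```python
# A deliberately naive sketch of "watching the trend" on election night:
# fit the margin against the share of precincts reporting and extrapolate
# to 100 percent. Made-up numbers, and it only works to the extent returns
# really do come in roughly at random -- the caveat above applies.

reporting = [10, 20, 30, 40, 50]        # percent of precincts reporting
margin    = [10.0, 9.0, 8.1, 7.0, 6.1]  # Romney's lead in points at each step

n = len(reporting)
mean_x, mean_y = sum(reporting) / n, sum(margin) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(reporting, margin))
         / sum((x - mean_x) ** 2 for x in reporting))
intercept = mean_y - slope * mean_x

print(f"Projected margin at 100 percent reporting: {intercept + slope * 100:+.1f} points")
# Roughly +1.2 here: a big early lead that the trend says is mostly gone.
```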

Other reading

The Times asked a bunch of experts if we should abolish tracking polls. The ones who said we should are wrong and should be ignored.

You could also have skipped most of this post and just read Nate Silver's what-to-watch essay. Maybe we should have mentioned that up top.

So ends Polling 101. Grades will be posted outside my office tomorrow at 4 p.m. Right now you have a 62 percent chance of passing, but the trend isn't good.

Photo of the grave marker of George H. Gallup, founder of the Gallup Poll, in the Princeton Cemetery by Tony Fischer via his Flickr.

This article is from the archive of our partner The Wire.
