# Is Ohio a 'Toss-Up'?

The *Columbus Dispatch*, the only newspaper in Ohio's biggest city, has declared Ohio a "toss-up" between President Obama and Mitt Romney. This will please the Romney camp, which has been fighting hard against the "Ohio is Obama's rock-solid firewall" narrative.

For reasons I'll explain, this headline is misleading. And it's tempting to think it's intentionally misleading. After all, the *Dispatch* endorsed Romney (and in fact hasn't endorsed a Democrat for president since 1916). And the lead paragraph of the story under the "toss-up" headline does have a certain Mitt Romney-pep-rally quality to it: "The 'Ohio firewall' precariously stands for President Barack Obama, but a strong Republican turnout could enable Mitt Romney to tear it down on Election Day."

Still, I think we can give the *Dispatch* the benefit of the doubt and assume that its own firewall -- the church-state separation that is supposed to keep a newspaper's editorial stance from coloring its reporting -- is intact. Because there's a simpler explanation for the "toss-up" headline: It rests on the same slightly-too-simple way of thinking about polling that lots of reporters and other Americans evince every four years.

Presumably the reason the headline writer felt justified in calling the race a toss-up was this paragraph in the story: "The final Dispatch poll shows Obama leading 50 percent to 48 percent in the Buckeye State. However, that 2-point edge is within the survey's margin of sampling error, plus or minus 2.2 percentage points."

That wording suggests that Obama's two-point edge has no meaning. And that's a common way for journalists to interpret results that fall within the "margin of error." For example, in September a conservative columnist in the *New York Post* asserted that Obama's lead in state polls didn't matter because the "polls separating the two candidates are within the margin of error -- meaning that there is no statistical difference in support between Obama and Romney."

Explaining why it's wrong to say there's "no statistical difference" in such cases will take a couple of paragraphs, so please bear with me (unless you recently took a stats course, in which case feel free to skip ahead).

Here's what pollsters never tell you, except maybe in the fine print: When they say there's a margin of error of X, they don't mean there's no chance whatsoever that the poll is off by more than X. Typically, their margin-of-error calculations are based on a confidence level of 95 percent, which means the chances of the poll being off by more than X are 5 percent.

In the *Columbus Dispatch* poll, X was 2.2. So if the poll had found Obama was ahead by 2.21 points, the finding would have been "outside the margin of error" and thus treated with great respect -- but the fact is that there would still have been a 5 percent chance that, in the actual voting population, Romney was ahead. (I'm assuming the *Dispatch* followed convention and used the 95 percent confidence threshold in calculating its margin of error -- see postscript below.) The flip side of this coin is that the Obama lead of 2 points in the poll, though less than the margin of error of 2.2 points, is by no means devoid of significance. If you did the math -- which, many years after taking quantitative analysis in college, I lack the brain cells to do precisely -- you'd find that there's a probability of somewhere in the neighborhood of 90 percent that Obama is ahead in the voting population as a whole.
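For anyone who wants to check that arithmetic, here's a rough sketch (in Python) of the calculation, under two common simplifying conventions: one that applies the reported 2.2-point margin of error directly to the lead, and one that treats it as the margin for a single candidate's share, in which case the lead's margin is roughly double. (The exact figure depends on which convention you adopt; neither is *the* pollster's method.)

```python
from math import erf, sqrt

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

lead = 0.02   # Obama 50 percent, Romney 48 percent

# Convention A: the reported 2.2-point margin applies directly to the lead.
se_a = 0.022 / 1.96
# Convention B: the margin is for one candidate's share, so the lead's
# standard error is roughly twice as large.
se_b = 2 * 0.022 / 1.96

p_a = phi(lead / se_a)   # probability Obama is really ahead, convention A
p_b = phi(lead / se_b)   # same probability, convention B
```

Depending on the convention, the probability comes out somewhere between roughly 80 and 96 percent -- a range that brackets the rough 90 percent figure above.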

Now, we generally reserve the term "toss-up" for odds in the vicinity of 50-50 -- since the term, after all, refers to a coin toss. So when the odds are more like 90-10, it's a bit misleading for the *Dispatch* to use the term. I mean, you *could* define "toss-up" that loosely, but then you could also define it loosely enough to include 95-5 odds or 96-4 odds, in which case even an Obama lead that fell outside the "margin of error" as conventionally defined could be deemed a "toss-up."

My main point is just that the definition of "margin of error" is a bit arbitrary -- a line drawn somewhere on a continuum of probabilities. (In fact, some pollsters use a 90-percent confidence level in calculating the margin of error, though most now use the 95-percent standard.) Yet we treat the "margin of error" as some binary thing -- as if numbers that fall inside it are meaningless ("a statistical tie") and numbers that fall outside it deserve complete confidence. No polling result ever deserves complete confidence, because the only way to get the confidence level up to literally 100 percent is to sample the entire voting population. We call that poll "election day."

I should add two asterisks, one of which will make Obama voters feel more confident of victory in Ohio, and one of which will make them feel less so. First the confidence booster:

So many polls are being taken in Ohio these days that you can aggregate them and considerably reduce your margin of error, since the combined sample is much larger than the *Dispatch* poll's sample. And if you do that, you'll discover that the polls, on average, actually show Obama ahead by more than the 2 points found in the *Dispatch* poll. So for Obama supporters it's a twofer: The size of Obama's lead grows, and the margin of error shrinks.

Let's actually run through the exercise. Here, from the *RealClearPolitics* polling aggregator, are the seven Ohio polls that, as of this morning, had been published in November:

The total sample size is 7,568. That means the margin of error (as conventionally calculated, with a 95-percent confidence level) is somewhere around 1 point. If you average the seven numbers together, you get an Obama lead of 2.9 points. But that's a crude average. If we're going to be precise we should weight each poll according to its sample size, which gives us an average of 3.1 points.

So, collectively, polls conducted in Ohio recently have a (95 percent confidence level) margin of error of somewhere around 1 point and give Obama a lead of 3.1 points. That's no toss-up, even according to the loose definition of toss-up used by journalists who report on opinion polls. In fact, with a lead this big and a sample size this big, our confidence that Obama is ahead in the Ohio voting population at large is around 99 percent. Feel better, Obama supporters?
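A quick back-of-the-envelope version of that pooled calculation -- treating the seven polls as one big random sample, which is itself a simplification -- looks like this:

```python
from math import erf, sqrt

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

n = 7568       # combined sample across the seven November polls
lead = 0.031   # sample-size-weighted average Obama lead

# Conventional margin of error for a single candidate's share,
# at the 95-percent confidence level (worst case, 50/50 split).
moe_95 = 1.96 * 0.5 / sqrt(n)

# Rough standard error of the lead itself (about twice the share's SE),
# and the implied probability that Obama leads in the full population.
se_lead = 1.0 / sqrt(n)
p_ahead = phi(lead / se_lead)
```

The pooled margin of error comes out around 1.1 points, and the probability that Obama leads comes out around 99.6 percent -- consistent with the "around 1 point" and "around 99 percent" figures above.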

Not so fast! Here's what should shake your confidence in these polls:

All the statistical reasoning described above is premised on the assumption that the sample population is a random sample to begin with. Now, it's true that it's possible, in principle, to get a truly random sample. For example: If you have a jar full of red and blue jelly beans, all mixed up, and you're taking jelly beans out of the jar while blindfolded, you get a truly random sample. The larger your sample, the more likely it is to reflect the ratio of red to blue jelly beans in the jar as a whole -- and the exact likelihood can be accurately determined by the math alluded to above. In a situation like this, the probabilistic margins of error that pollsters calculate will be entirely reliable.
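You can watch this play out in a quick simulation. Here's a sketch that "polls" a hypothetical jar that is 52 percent red, drawing 1,500 beans at a time, and checks how often the sample lands within the conventional margin of error (the 52 percent figure and the seed are arbitrary choices for illustration):

```python
import random
from math import sqrt

random.seed(7)  # fixed seed so the sketch is reproducible

TRUE_RED = 0.52   # hypothetical share of red jelly beans in the jar
N = 1500          # beans drawn per "poll"

def sample_share(n=N):
    """Blindfolded draws from a well-mixed jar: each bean is red
    with probability TRUE_RED, independently of every other bean."""
    return sum(random.random() < TRUE_RED for _ in range(n)) / n

# Repeat the poll many times: about 95 percent of sample shares should
# land within the conventional margin of error of the true share.
moe = 1.96 * sqrt(0.25 / N)
trials = 2000
hits = sum(abs(sample_share() - TRUE_RED) <= moe for _ in range(trials))
coverage = hits / trials
```

The coverage should come out very close to 95 percent -- which is exactly the sense in which the margin of error is reliable for truly random samples.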

But pollsters don't pick jelly beans out of a jar. They call jelly beans on the phone (or try to reach them online or, as in the case of the *Dispatch* poll, solicit responses by mail). And for all they know, blue jelly beans are less likely to be near a phone than red jelly beans -- or less likely to answer the phone, or less likely to agree to be polled after they answer the phone, or whatever.

This is a very challenging problem, in part because the technological landscape is changing so fast that it's hard for pollsters to use their experience from the last presidential election as a basis for refining their methodology. Among the things that presumably have changed since 2008: the number of people who have cell phones, the number who have abandoned land lines in favor of cell phones, the number who have caller ID and use it, the number who ignore calls from unknown parties, etc. And these kinds of things tend to vary by age, income level, ethnicity, etc. -- all of which correlate with which candidate a person will vote for. Pollsters can do things to try to correct for all of this, but the ground is shifting so fast that it's hard for them to know they're doing the right things.

So I'm sympathetic to people who say the polls can't be trusted. But if the polls can't be trusted, then the logic behind that *Columbus Dispatch* headline -- and the story underneath it, and the poll the story is based on -- starts to fall apart. The premise of doing a poll and writing about it, and putting it under a banner headline on your front page, is that polls in some meaningful sense *can* be trusted. And if polls can be trusted, there's no basis for calling Ohio a "toss-up" in the sense that we normally use that term.

[*Postscript*: I've assumed that the *Dispatch* pollsters, in calculating their 2.2-point margin of error, are following the most common convention and using a 95-percent confidence level. But the poll's fine print doesn't say. And my sense, from nosing around online, is that 2.2 is actually a bit low for a 95-percent margin of error when your sample size is 1,500 -- though, on the other hand, it sounds slightly high for a 90-percent margin of error. But, as I said, my math is a bit rusty, so I'll hope some whiz-kid commenter clarifies the situation for us. Anyway, the basic morals of this story hold regardless of how exactly the *Dispatch* is defining its margin of error. And, in any event, I'm pretty sure I'm on solid ground in saying that Obama's 2-point lead, given a sample size of 1,500, implies a probability of roughly 90 percent -- give or take a few points -- that he is ahead of Romney in the voting population at large (or would be, if the sample were truly random).]
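[For the whiz-kid commenters, here's a back-of-the-envelope version of that check, assuming a sample of 1,500 and the worst-case 50/50 split:

```python
from math import sqrt

n = 1500               # approximate Dispatch sample size
se = 0.5 / sqrt(n)     # worst-case standard error of a single share

moe_95 = 1.96 * se     # about 2.53 points at 95 percent confidence
moe_90 = 1.645 * se    # about 2.12 points at 90 percent confidence
```

So a 2.2-point margin does indeed fall between the two conventions -- low for 95 percent, slightly high for 90 percent -- just as suspected above.]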