Before we get started on this, I want to say that this is a layman's attempt to discuss a really complicated issue. Folks who know this stuff are more than welcome to point out the flaws. Alright, let's go.
So, in between bouts of Warcraft, and a family trip to Dive Bar to watch the playoffs, I talked to some folks who were smarter than me about polling, Prop 8, and black folks. The biggest takeaway was that journalists/bloggers/writers/pundits need to be a lot more careful when deploying polling data. We should be even more careful when deploying numbers about minority communities, if only because of the sample sizes. And we should be especially careful about drawing broad conclusions based on one exit poll.
There are many reasons to doubt that Prop 8 garnered 70 percent approval in the black community, and there are many more reasons to doubt that African-Americans were the ones who killed gay marriage in California. The first thing to understand is the methodology at work. Exit pollsters select a group of random precincts which reflect, as accurately as possible, the demographics of the state. Then they approach every nth voter as they leave the polls. Then they do some phone polling to account for absentee balloting. Then, as the returns come in, they weight the results to match the actual vote count, as well as the actual demographics of the polling area.
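That "every nth voter" step is what statisticians call systematic sampling. Here's a minimal sketch of the idea; the precinct size and interval are purely illustrative, not the pollsters' actual numbers:

```python
import random

def systematic_sample(voters, n):
    """Approach every nth voter leaving the polls,
    starting from a random offset (systematic sampling)."""
    start = random.randrange(n)
    return voters[start::n]

# Illustrative: 1,000 voters at one precinct, every 10th approached
voters = list(range(1000))
sample = systematic_sample(voters, 10)
print(len(sample))  # 100 voters approached
```

Note that this only works as intended if the interviewer actually catches every nth voter; as discussed below, high refusal rates mean the people who end up in the sample can differ systematically from those who don't.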
Almost every step introduces the possibility of error. For instance, census data is the only reliable means for understanding the racial makeup of a particular precinct. But how do you know that the racial makeup matches the makeup of eligible voters? Of registered voters? Thus it's possible to get an unrepresentative precinct.
"The precinct may or may not be representative," said Patrick Egan, NYU professor and author of the latest Prop 8 study. "A pure sample picks random people out out of hat. But an exit poll is like picking a selection of people and putting into different hats, and then trying to pick out of those hats."
Then there's actual execution. David Moore, a former VP of Gallup, said that refusal rates for exit polls can run as high as 50 percent. "When you have small sample sizes in the case of minorities, exit polls aren't very good predictors," said Moore. "There are so many people who refuse to participate, that you have a response rate problem--and then people who do respond are different than those who don't."
Polling, in general, is subject to some margin of error. But as sample size decreases, the chance of error increases. The CNN Exit Poll estimated that 10 percent of California's vote was African-American. Most professionals I talked to this weekend thought that was a really high number. (African-Americans make up 7 percent of California's population--and they tend to be a younger 7 percent, compared to whites. A ten percent share would mean they were overrepresented by somewhere around 30 percent.) But let's say that 10 percent number is correct--it means that the (weighted?) sample size for African-Americans was about 224 people.
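A quick way to see what a sample of 224 buys you is the standard margin-of-error formula, assuming (generously) a simple random sample. A back-of-the-envelope sketch:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a simple random sample of size n
    with observed proportion p. This ignores exit polls' clustered
    design and nonresponse, which push the real error higher."""
    return z * math.sqrt(p * (1 - p) / n)

# ~224 black respondents reporting 70 percent support
moe = margin_of_error(0.70, 224)
print(round(100 * moe, 1))  # roughly +/- 6 points
```

Even under that generous assumption, the 70 percent figure carries about a six-point margin either way, and the refusal-rate problems Moore describes only widen it.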
Now, these problems are generally true of all exit polls, and polling in general. I'm not writing this to impugn anyone's credibility. Opinion pollsters know that these are the hazards built into an imperfect science. So how do they get around them? Typically you look at a group of polls (as a lot of us did during this election) and you try to get some range of where things are. That's exactly what did not happen in the reporting around Prop 8. All of our analysis and blogging was based on a single poll. Think of it this way--What if we had just one poll for every state during the general, as opposed to people like Nate Silver looking at multiple polls? How different would our impressions be?
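To make that concrete, here's a toy illustration with entirely made-up numbers: five hypothetical polls of the same question, instead of one. The average and the spread tell you far more than any single reading would.

```python
import statistics

# Hypothetical poll results, purely for illustration
polls = [58, 62, 70, 55, 60]

print(statistics.mean(polls))  # the average across polls
print(min(polls), max(polls))  # the range shows how much one poll can mislead
```

If you had only seen the 70, you'd have walked away with a very different impression than the full set of numbers supports--which is roughly what happened with the single Prop 8 exit poll.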