Yesterday we wrote about a study that found people who watch no news were better informed than those who exclusively watched Fox News. And yes, we heard your complaints.
From comments questioning the validity of the study to those pondering the scientific method and margin of error, our comment thread was flooded with outrage (ranging from mild to burning). And hey, we even admitted that some of those questions were quite difficult.
So let us point you to The Atlantic's James Fallows, who found one of the best-argued counterarguments to the study. It also happens to address two gripes many of our commenters had. Fallows highlights the complaint:
The deep breakdown of data in the survey. 1,185 people sounds like a lot, but when it is broken down to such a low level the sample size dwindles. The graph that you use in your post shows the average number of questions answered correctly by respondents who reported getting their news from just this source in the past week. So of the 1,185, how many watched Fox News and not any of the other sources listed? MSNBC? I would think that most people get their news from multiple sources (local news AND Fox News for example). These people are apparently excluded from the analysis. Presumably, the remaining sample could be quite small ...
Lack of standard errors on the correct-answers statistic. "The margin of error for a sample of 1185 randomly selected respondents is +/- 3 percentage points. The margin of error for subgroups is larger and varies by the size of that subgroup." The sizes of the subgroups on which the graph is based are not mentioned. Also, +/- 3 percentage points does not apply to the number of questions answered correctly. I do not see evidence of statistical testing to show there are significant differences among respondents reporting different news sources (though I suppose there's a chance it may just not have been mentioned in the report).
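To see why the subgroup complaint has teeth, here's a quick back-of-the-envelope sketch of the standard margin-of-error formula for a proportion. It assumes a simple random sample, 95% confidence, and the worst-case proportion p = 0.5; the subgroup size of 100 is purely hypothetical, since the report doesn't say how large the single-source subgroups actually are. (And as the commenter notes, this applies to proportions, not to the average number of questions answered correctly.)

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error, in percentage points, for a proportion
    estimated from a simple random sample of size n. Uses the
    worst case p = 0.5 when the true proportion is unknown."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

# Full sample of 1,185 respondents: roughly +/- 2.8 points,
# which is where the report's "+/- 3 percentage points" comes from.
print(round(margin_of_error(1185), 1))

# A hypothetical subgroup of 100 (people who watched ONLY Fox News,
# say): the margin balloons to roughly +/- 9.8 points.
print(round(margin_of_error(100), 1))
```

The point is simply that slicing 1,185 respondents into exclusive single-source subgroups can leave each slice with a margin of error several times larger than the headline figure.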
For Fallows' full piece and argument, head on over to The Atlantic.