Pushback on NPR vs. Fox


I get off a connecting flight in Newark, en route to Shanghai, to see a mailbox full of notes questioning an item from last night. That item was based on a chart appearing to show that Fox News viewers overall did worse on a test of public-affairs factual knowledge than those who got their news elsewhere, or even than those who said they didn't watch the news at all.

Here's the most fully argued version of the comments I've received, from a reader in New York. All emphasis in original:

I've been following your "False Equivalence" series and have generally enjoyed and agreed with your insights, but I fear you may have jumped to a possibly unfounded conclusion on this one.  I'm a statistician by trade; I have worked with various US government statistics departments in the past and currently work for an international organization.  Though I find these results entertaining from a media-frenzy point of view, a number of alarm bells go off right away when I see this survey.  In ascending order of what bothered me most (with the relevant survey disclaimer quotes in italics):

    1.    It was conducted as a telephone survey.  "Survey results are also subject to non-sampling error. This kind of error, which cannot be measured, arises from a number of factors including, but not limited to, non-response (eligible individuals refusing to be interviewed)....."  With caller ID these days, what are the chances that randomly chosen people would pick up for an unknown number?  And of those who pick up, how many are likely to agree to talk on the phone for 10 minutes to complete a survey such as this?  I would surmise that the response rate was quite low (I didn't see any documentation in the report).  A low response rate raises the possibility of nonresponse bias: the possibility that certain demographic types would be undersampled.  The report states that responses were reweighted to account for discrepancies in race, age, and gender proportions as compared to the national average, but presumably there are other factors that go into nonresponse bias.

    2.    Only 8 questions were asked.  "Survey results are also subject to non-sampling error. This kind of error, which cannot be measured, arises from a number of factors including, but not limited to, ..... question wording, the order in which questions are asked, and variations among interviewers."  This is a structural bias issue.  For example, what if Fox News reported particularly poorly on one or more of the topics included in the survey, but reported much better on other topics not included?  While I don't see any inherent bias in the questions, that doesn't mean there isn't any.  How were the questions selected?  Did liberals, conservatives, and centrists all screen them for bias?  And how well does the result of 8 random news questions relate to "what you know" anyway?

    3.    The deep breakdown of data in the survey.  1,185 people sounds like a lot, but when it is broken down to such a low level, the sample size dwindles.  The graph that you use in your post shows the average number of questions answered correctly by respondents who reported getting their news from just this source in the past week.  So of the 1,185, how many watched Fox News and not any of the other sources listed?  MSNBC?  I would think that most people get their news from multiple sources (local news AND Fox News, for example).  These people are apparently excluded from the analysis.  Presumably, the remaining sample could be quite small.  Which leads to possibly the most important issue:

    4.    Lack of standard errors on the correct-answers statistic.  "The margin of error for a sample of 1185 randomly selected respondents is +/- 3 percentage points. The margin of error for subgroups is larger and varies by the size of that subgroup."  The sizes of the subgroups on which the graph is based are not mentioned.  Also, +/- 3 percentage points does not apply to the number of questions answered correctly.  I do not see evidence of statistical testing to show there are significant differences among respondents reporting receiving their news from different sources (though I suppose there's a chance it may just not have been mentioned in the report).

While I'm not sure that the team at Fairleigh Dickinson could have done a much better job than they did with their resources, I think this type of survey does not rise to the level of "news" (nor do most soft surveys like this).  It is extremely easy to jump to conclusions based on a graph that agrees with one's inklings about news sources even when the data behind it may not lend itself to clear-cut conclusions.  Another thing that should be noted is the issue of causality.  You note in your post "that NPR aspires actually to be a news organization and provide 'information', versus fitting a stream of facts into the desired political narrative."  While this could be true, it is also possible that even if the survey results were correct, there may be a bit of self-selection when choosing news networks.  In that case, ignorance could be the viewer's fault rather than the fault of Fox News.
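To make the reader's fourth point concrete, here is a minimal sketch of the standard simple-random-sampling margin-of-error formula the report's "+/- 3 percentage points" figure presumably comes from. The subgroup size of 150 is hypothetical (the report does not publish subgroup sizes); the point is only how quickly the error band widens as the sample shrinks.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a sample proportion.

    Uses the worst case p = 0.5, which maximizes p*(1-p),
    and the usual z = 1.96 for a 95% confidence level.
    """
    return z * math.sqrt(p * (1 - p) / n)

# Full sample of 1,185: roughly the +/- 3 points the report cites.
print(round(margin_of_error(1185) * 100, 1))  # -> 2.8

# A hypothetical subgroup of 150 single-source viewers is far noisier.
print(round(margin_of_error(150) * 100, 1))   # -> 8.0
```

With an 8-point band around each subgroup's score, the gaps between the news-source bars in the chart could easily fall inside sampling noise, which is exactly why the reader asks for the subgroup sizes and significance tests.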

These are convincing points; I am sorry if I passed this chart along too eagerly and credulously, without reading the caveats. I have been big on the theme that reporters and commentators should not so often rush to conclusions and should instead be more aware of what they/we do not know. Conveniently and in my public-spirited way, I have now provided an illustration of this tendency myself. On the other hand, I do very much re-suggest consideration of the important false-equivalence item from masscommons I mentioned last night.

Finally, a sample of another recurring theme:

I take some exception to this post, on how Fox viewers answer fewer questions correctly than NPR viewers. I'll bet that Fox viewers tend to be more conservative than NPR listeners. Conservatives tend to be less educated than liberals, and less educated people probably know less about current events.

There are any number of correlations that could be driving this result, and until those are explored, the only safe claim you can make is that Fox attracted less informed viewers than NPR, not that Fox provides less information. The latter might be true, and it might be your opinion, but this isn't proper evidence for it.

James Fallows is a national correspondent for The Atlantic and has written for the magazine since the late 1970s. He has reported extensively from outside the United States and once worked as President Carter's chief speechwriter. His latest book is China Airborne.


Fallows welcomes and frequently quotes from reader mail sent via the "Email" button below. Unless you specify otherwise, we consider any incoming mail available for possible quotation -- but not with the sender's real name unless you explicitly state that it may be used. If you are wondering why Fallows does not use a "Comments" field below his posts, please see previous explanations here and here.