Standardized Testing Takes Another Hit, Cont'd

Editor’s Note: This article previously appeared in a different format as part of The Atlantic’s Notes section, retired in 2021.

A few readers emailed their thoughts in response to my note on the nationwide decline in mean SAT scores. Here’s Abigail Haddad:

You introduce a very plausible cause for the lower scores but apparently don’t realize it:

The mean scores for reading, math, and writing for the Class of 2015 were 495, 511, and 484, respectively—with each category down by about two points from the previous year. The shift is subtle but significant, particularly considering a record number of students took the exam this past round.

The shift in scores is actually made less, rather than more, significant by that record number of students. This increase in test-takers suggests that the issue may be not that high school seniors are getting worse at whatever the SAT is measuring, but rather that students who would not previously have taken the SATs are now taking them, and those teenagers are worse at whatever the SAT is measuring. The only way we would expect this not to have explanatory power would be if the “new” SAT takers don’t perform worse than the “old” SAT takers, which seems very implausible.

The way to evaluate how much of an effect additional SAT-takers are having would be to look at the respective distributions of scores in the most recent data vs. previous data. My hypothesis predicts that the ratio of high-scorers to 17-year-old Americans hasn’t really changed. That is, high-scorers were taking it before and they continue to take it; there are just more low-scorers relative to the number of 17-year-olds as we’ve added more test-takers. I don’t know if College Board has put out this data yet—if they haven’t, I’m sure they will soon.
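Haddad’s selection effect can be illustrated with a toy simulation (all numbers below are invented for illustration, not College Board data): adding a batch of lower-scoring “new” test-takers drags the mean down even though the original test-takers perform exactly as before, and the rate of high scorers per 17-year-old barely moves.

```python
import random

random.seed(0)

# Hypothetical cohort size: 100,000 seventeen-year-olds (invented number).
COHORT = 100_000

# Year 1: 40,000 test-takers drawn from the stronger end of the cohort.
year1 = [random.gauss(510, 100) for _ in range(40_000)]

# Year 2: the same 40,000 takers, plus 5,000 "new" takers who, on
# average, score lower (the group Haddad hypothesizes).
new_takers = [random.gauss(440, 100) for _ in range(5_000)]
year2 = year1 + new_takers

mean1 = sum(year1) / len(year1)
mean2 = sum(year2) / len(year2)

# High scorers (here, 600+) per 17-year-old: the new takers contribute
# few of them, so this ratio stays nearly flat even as the mean falls.
high1 = sum(s >= 600 for s in year1) / COHORT
high2 = sum(s >= 600 for s in year2) / COHORT

print(f"mean score: {mean1:.0f} -> {mean2:.0f}")
print(f"high scorers per 17-year-old: {high1:.4f} -> {high2:.4f}")
```

The mean drops by several points while the high-scorer ratio shifts only marginally, which is exactly the pattern Haddad predicts the College Board’s distribution data would show if the decline is driven by new test-takers rather than by weaker performance overall.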

My reader makes a valid point, and I agree that the growth in the test-taking population may help explain why the scores have slipped. But that doesn’t make the scores less significant: A larger sample means the results are more representative of the nation’s students than ever before.

What makes the Class of 2015’s scores even more noteworthy is the context.

For one, performance on the National Assessment of Educational Progress—which is administered by the federal government to gauge proficiency rather than eligibility and is taken by a cross-section of students—is equally discouraging. For another, the country is in the midst of a full-throttle initiative to, yes, boost the number of kids who are “college-and-career ready” by the time they finish high school. According to the College Board, only 42 percent of the Class of 2015 met that benchmark.

That said, it seems the reader is correct in her hypothesis: The ratio of high scorers to the total number of 17-year-old Americans hasn’t changed much at all, according to the College Board’s data. And ultimately, perhaps the small score fluctuations are a distraction.

I followed up with the College Board after writing the initial note and got a response from Cyndie Scheiser, the company’s chief of assessment:

Frankly, we’re more concerned that the percentage of students who graduate high school ready for college hasn’t increased in the past five years. Simply doing the same things we have been doing is not going to improve these numbers. This is a call to action to do something different to propel more students to readiness.

(Scheiser then plugged the redesigned version of the SAT—an overhauled exam that’s being launched next year and is projected by critics to make quality higher education even more out of reach for disadvantaged students.)

Another reader, Chris Blazier, shares his informed view:

Hello, I am a postdoctoral researcher in biology who has taught PSAT/SAT classes on the side for over a decade. You never explain in your piece how the decline in students’ scores indicates anything bad about testing. In fact, the alternative explanation seems more likely: the education system that produces the students who are scoring worse on standardized tests is to blame for poor performance.

Standardized testing isn’t perfect, but it is consistent, and the material it covers is pretty cut and dried. As a teacher, I focus most on the so-called Writing Section, since it is the easiest to improve students’ scores on. We do a thorough review of English grammar: Everyone comes out of the course knowing the difference between a participle and a gerund, how to maintain parallelism, how to avoid mistakes in subject-verb agreement and problems with verb tenses. As much as possible, I draw comparisons with Spanish, which most students learn here in Texas either at home or in school. I see no evil in the notion of anyone “teaching to the test” when the material is basically math, grammar, and critical reading.

You seem to find ammunition against standardized testing in the fact that minority students tend to do worse on standardized tests. I can tell you from my experience teaching for these tests, as well as my experience tutoring at public schools in Texas, that minority students tend to do worse because their parents tend to be less affluent and well educated and because the students attend worse schools starting from kindergarten. I don’t see how you can blame this kind of obvious, systematic disadvantage on standardized testing. You are just shooting the messenger.

I’m sure you have more objections to testing, but I have seen no better system than standardized testing for college admissions. The University of Texas tried to admit the top 10% of students from every school in Texas, with disastrous results. The quality of schools in Texas, like in much of the nation, is quite heterogeneous. Students in the top 10% from very bad schools in Texas were being admitted to UT-Austin and doing very poorly, and this was bad for everyone.

Standardized testing provides some standard across our heterogeneous educational system, and I don’t know of any other proposed method to do this.

I agree—the news about the drop in SAT scores isn’t, at face value, a reflection of how the exam is faring as an educational tool. But as Scheiser herself indicated, the College Board sees its role in education as going beyond simply administering exams. So, testing stigmas aside, that in itself would make the news a negative development for the SAT. As Scheiser put it:

That’s why we’re redesigning the SAT to focus on the few things research and evidence show matters most for college success; provide free practice to help improve those skills; and deliver opportunities for students to succeed in college and careers. It will take time to improve these numbers, but we’re deeply committed to making progress.