According to Andrew Ho, a professor at the Harvard Graduate School of Education, most standardized tests are designed so that the share of examinees who answer a given question correctly averages around 60 percent. “If 90 percent of students get a question right [on the SAT], then nine out of 10 of them are indistinguishable,” Ho said. “Such a question does little to distinguish their skills.” Tests need to be difficult, but there is no value, either, in making them so difficult that nine out of 10 students get a question wrong.
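A rough sketch of the arithmetic behind Ho’s point (an illustration only, not anything the College Board publishes): a question can tell two test-takers apart only when one of them answers it correctly and the other does not, and the share of such pairs shrinks quickly once most students are getting the question right.

```python
# Rough illustration, not College Board methodology: a question separates two
# test-takers only when one answers it correctly and the other does not.
# If a fraction p of examinees answers an item correctly, the share of
# examinee pairs the item can tell apart is 2 * p * (1 - p).
for p in (0.5, 0.6, 0.9):
    print(f"{p:.0%} correct -> distinguishes {2 * p * (1 - p):.0%} of pairs")

# 50% correct -> distinguishes 50% of pairs
# 60% correct -> distinguishes 48% of pairs
# 90% correct -> distinguishes 18% of pairs
```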
Even when the content on a particular test is harder, the job of a test-maker, Ho explained, is to make sure that the ultimate average score is the same regardless of the test. Testing companies, like the College Board and ACT, have entire departments devoted to “equating,” the process of ensuring that it doesn’t matter whether you take the test in one month or another. “Say, on the old test, if you got a 90 percent, that might get you a 1500. There are dozens of people whose job it is to make sure that, if you got an 80 percent on the new test, you still get a 1500,” Ho said, adding that it’s impossible to do this perfectly. “This process assumes that you’re measuring the same [domain of content and skills], and the entire premise of the new test is that it’s measuring something related but ultimately different, ideally more relevant.”
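A toy example makes the idea of equating concrete. The sketch below uses made-up score distributions and the simplest version of the technique, equipercentile equating, not the College Board’s actual procedure: a score on the new form is mapped to the old-form score that sits at the same percentile rank, so the same scaled score carries the same meaning on both.

```python
import numpy as np

# Made-up raw-score samples for two test forms; the "new" form is a bit easier,
# so its scores run higher even though the test-takers are comparable.
rng = np.random.default_rng(0)
old_form = rng.normal(60, 12, 100_000).clip(0, 100)
new_form = rng.normal(64, 12, 100_000).clip(0, 100)

def equate_to_old(raw_on_new):
    """Equipercentile equating: find a score's percentile rank on the new form,
    then return the old-form score at that same percentile."""
    rank = (new_form <= raw_on_new).mean() * 100
    return np.percentile(old_form, rank)

# A raw 80 on the easier new form corresponds to roughly 76 on the old form;
# both would then convert to the same scaled score.
print(round(equate_to_old(80)))
```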
The new SAT is different in many ways from the old model. To name just a few, the questions have four rather than five answer choices, there are fewer math concepts covered, and hard vocabulary is no longer directly tested. Comparing the two tests is like comparing apples to oranges. Instead, the College Board has come up with calculations that allow colleges to compare scores on the new SAT with those on the old one. Its research has found, for instance, that a 730 on the new test’s math section is equivalent to a 700 on the old. The College Board is strongly encouraging admissions officers to use these formulas to compare applicants who took different tests, rather than look at percentiles.
The higher average scores and the overall rise in the performance percentiles on both the math and reading sections of the new SAT have led some critics, such as Dan Edmonds of Noodle Education, to speculate that the College Board may be intentionally inflating scores to attract more students. In 2012, the ACT became the most popular college-admissions test in the country. Many of the changes the College Board has made to the test appear designed to make the SAT more attractive to students, states, and school districts, which are increasingly paying for students to take the exam during the school day.
There are, however, likely valid reasons why the percentiles have floated upward. Students are no longer penalized for picking a wrong answer, for example; they also have more time to answer each question on the test. These factors led to fewer people getting low scores, thus pushing the average up. As Adam Ingersoll of Compass Education Group explained in an email, “College Board has decided to accept this ‘natural’ lift resulting from changes to the test,” a move he described as “perfectly reasonable.”
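The guessing change alone is easy to quantify. The old SAT deducted a quarter point for each wrong answer on its five-choice questions, so blind guessing gained a test-taker nothing on average; the new test has four choices and no deduction, so guessing nets a quarter of a point per question. A back-of-the-envelope check (my own arithmetic, not a figure from the College Board):

```python
# Expected raw points from guessing blindly on a single question.
# Old SAT: five answer choices, minus 1/4 point for a wrong answer.
old_guess = (1/5) * 1 + (4/5) * (-1/4)   # 0.0  -- guessing gained nothing
# New SAT: four answer choices, no deduction for wrong answers.
new_guess = (1/4) * 1 + (3/4) * 0        # 0.25 -- guessing adds points

print(old_guess, new_guess)              # 0.0 0.25
```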