Imagine for a moment that a rich, innovative company is looking to draft the best and brightest high-school grads from across the globe without regard to geography. Let’s say this company’s recruiter has a round-the-world plane ticket and just a few weeks to scout for talent. Where should he go?
Our hypothetical recruiter knows there’s little sense in judging a nation like the United States by comparing it to, say, Finland. This is a big country, after all, and school quality varies dramatically from state to state. What he really wants to know is, should he visit Finland or Florida? Korea or Connecticut? Uruguay or Utah?
Stanford economist Eric Hanushek and two colleagues recently conducted an experiment to answer just such questions, ranking American states and foreign countries side by side. Like our recruiter, they looked specifically at the best and brightest in each place—the kids most likely to get good jobs in the future—using scores on standardized math tests as a proxy for educational achievement.
We’ve known for some time how this story ends nationwide: only 6 percent of U.S. students perform at the advanced-proficiency level in math, a share that lags behind kids in some 30 other countries, from the United Kingdom to Taiwan. But what happens when we break down the results? Do any individual U.S. states wind up near the top?
Incredibly, no. Even if we treat each state as its own country, not a single one makes it into the top dozen contenders on the list. The best performer is Massachusetts, ringing in at No. 17. Minnesota also makes it into the upper-middle tier, followed by Vermont, New Jersey, and Washington. And down it goes from there, all the way to Mississippi, whose students—by this measure at least—might as well be attending school in Thailand or Serbia.
Hanushek, who grew up outside Cleveland and graduated from the Air Force Academy in 1965, has the gentle voice and manner of Mr. Rogers, but he has spent the past 40 years calmly butchering conventional wisdom on education. In study after study, he has demonstrated that our assumptions about what works are almost always wrong. More money does not tend to lead to better results; smaller class sizes do not tend to improve learning. “Historically,” he says, “reporters call me [when] the editor asks, ‘What is the other side of this story?’”
Over the years, as Hanushek has focused more on international comparisons, he has heard a variety of theories as to why U.S. students underperform so egregiously. When he started, the prevailing excuse was that the testing wasn’t fair. Other countries were testing a more select group of students, while we were testing everyone. That is no longer true: due to better sampling techniques and other countries’ decisions to educate more of their citizens, we’re now generally comparing apples to apples.
These days, the theory Hanushek hears most often is what we might call the diversity excuse. When he runs into his neighbors at Palo Alto coffee shops, they lament the condition of public schools overall, but are quick to exempt the schools their own kids attend. “In the litany of excuses, one explanation is always, ‘We’re a very heterogeneous society—all these immigrants are dragging us down. But our kids are doing fine,’” Hanushek says. This latest study was designed, in part, to test the diversity excuse.
To do this, Hanushek, along with Paul Peterson at Harvard and Ludger Woessmann at the University of Munich, looked at the American kids performing at the top of the charts on an international math test. (Math tests are easier to normalize across countries, regardless of language barriers; and math skills tend to better predict future earnings than other skills taught in high school.) Then, to get state-by-state data, they correlated the results of that international test with the results of the National Assessment of Educational Progress exam, which is given to a much larger sample in the U.S. and can be used to draw statewide conclusions.
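One common way such cross-test linking is done is a linear equating: put state NAEP averages onto the international scale by matching the U.S. national mean and standard deviation on both tests. The sketch below illustrates the general idea only; all numbers are invented, and the study's actual statistical method may well be more sophisticated.

```python
# Hypothetical illustration of linking NAEP scores to an international
# (PISA-like) scale by matching the U.S. national mean and standard
# deviation on both tests. Every number here is made up.

US_NAEP_MEAN, US_NAEP_SD = 280.0, 35.0   # invented national NAEP stats
US_PISA_MEAN, US_PISA_SD = 487.0, 90.0   # invented national PISA stats

def naep_to_pisa(naep_score: float) -> float:
    """Linearly map a NAEP score onto the PISA scale via z-scores."""
    z = (naep_score - US_NAEP_MEAN) / US_NAEP_SD
    return US_PISA_MEAN + z * US_PISA_SD

# Invented state averages on NAEP, placed on the international scale
# so they can be ranked alongside countries' PISA averages.
state_naep = {"State A": 315.0, "State B": 262.0}
state_on_pisa_scale = {s: naep_to_pisa(v) for s, v in state_naep.items()}
```

A state one NAEP standard deviation above the national mean lands one PISA standard deviation above the national PISA mean, which is what lets states and countries be placed side by side on one list.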
The international test Hanushek used for this study—the Programme for International Student Assessment, or PISA—is administered every three years to 15-year-olds in about 60 countries. Some experts love this test; others, like Tom Loveless at the Brookings Institution, criticize it as a poor judge of what schools are teaching. But despite his concerns about PISA, Loveless, who has read an advance version of Hanushek’s study, agrees with its primary conclusion. “The United States does not do a good job of educating kids at the top,” he says. “There’s a long-standing attitude that, ‘Well, smart kids can make it on their own. And after all, they’re doing well. So why worry about them?’”
Of course, the fact that no U.S. state does very well compared with other rich nations does not necessarily disprove the diversity excuse: parents in Palo Alto could reasonably infer that California’s poor ranking (in the bottom third, just above Portugal and below Italy) is a function of the state’s large population of poor and/or immigrant children, and does not reflect their own kids’ relatively well-off circumstances.
So Hanushek and his co-authors sliced the data more thinly still. They couldn’t control for income, since students don’t report their parents’ salaries when they take these tests; but they could use reliable proxies. How would our states do if we looked just at the white kids performing at high levels—kids who are not, generally speaking, subject to language barriers or racial discrimination? Or if we looked just at kids with at least one college-educated parent?