Of the many statistics publicized by universities and the college guides that evaluate them, few receive as much attention as those that measure the difficulty of admission. "Selectivity" data—ranging from simple admission rates to statistical profiles of the academic achievement of each school's freshman class—would seem to be useful enough: they provide students with a quick notion of whether their credentials might be a good match for any given school. But these statistics also have an almost fetishistic appeal, as if the more students a school turns away, the nobler the character of the few it admits.
How much does a school's selectivity reveal about the quality of the school or of the students who attend it? Does selectivity mean much of anything at all?
By way of experiment, The Atlantic gathered data on America's most selective schools and created a ranking of the top fifty. The ranking is derived entirely from three variables that college admissions officers commonly say are most indicative of a school's competitiveness: the school's admission rate, and the SAT scores and high school class rank of matriculating freshmen. For the purposes of this illustration the variables were weighted equally to produce a rank for each school for 2002 and certain previous years as well.
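The article does not publish its exact formula, but the described method (three equally weighted variables, with a lower admission rate counting as more selective) can be sketched as follows. This is a minimal illustration, not The Atlantic's actual computation; every school name and figure below is invented.

```python
# Hypothetical sketch of an equal-weight selectivity ranking.
# Each variable is rescaled linearly onto [0, 1] across the pool,
# and the three rescaled scores are averaged. All data are invented.

def rescale(values, invert=False):
    """Map values linearly onto [0, 1]; invert when a lower raw value is better."""
    lo, hi = min(values), max(values)
    scaled = [(v - lo) / (hi - lo) for v in values]
    return [1 - s for s in scaled] if invert else scaled

def selectivity_ranking(schools):
    """schools: list of (name, admit_rate, median_SAT, pct_in_top_tenth_of_class)."""
    names = [s[0] for s in schools]
    admit = rescale([s[1] for s in schools], invert=True)  # lower rate = more selective
    sat = rescale([s[2] for s in schools])
    rank = rescale([s[3] for s in schools])
    composite = [(a + s + r) / 3 for a, s, r in zip(admit, sat, rank)]
    return sorted(zip(names, composite), key=lambda pair: -pair[1])

schools = [
    ("School A", 0.11, 1495, 97),
    ("School B", 0.26, 1410, 88),
    ("School C", 0.49, 1290, 61),
]
for name, score in selectivity_ranking(schools):
    print(f"{name}: {score:.2f}")
```

Equal weighting is itself a judgment call, of course; reweighting the three variables, or choosing a different rescaling, would reshuffle schools near one another on the list — which is part of the article's point about how arbitrary such rankings can be.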
A cursory examination reveals a few surprises (the Coast Guard Academy makes the list, whereas West Point does not), but only a few. (And in fairness to West Point, it just missed the list, ranking fifty-fifth.) Generally, the expected schools are in more or less the expected places. The "Big Three" Ivies, plus MIT, Caltech, and Stanford, stand at the summit; they are followed in the top twenty by other Ivies, the "Little Ivies" (Swarthmore, Amherst, Williams), and a few other schools.
Perhaps the most fundamental myth of selectivity is that those admitted to a school somehow represent the apex of the applicant group—that they are the best of the best. In truth, many elite schools could fill their classes twice over from the pool of rejected applicants and suffer no decline in quality. Emory's admissions dean estimates that between 50 and 60 percent of the applicants that Emory rejected last year were statistically as strong as those offered admission. The students on Duke's waiting list typically have better SATs and higher class ranks than the students who end up enrolling. Swarthmore this year rejected 62 percent of applicants with an 800 verbal SAT score and 58 percent of those with an 800 math score. Last year Notre Dame rejected 39 percent of the high school valedictorians who applied.
The neat hierarchy of selectivity begins to fall apart in other ways when one looks more closely, and considers applicants as what they are: individuals with different characteristics, applying at different times. Take sex. Overall, MIT is No. 1 among the top fifty selective schools, and Swarthmore is No. 10. Yet a woman applying to both schools would find Swarthmore considerably harder to get into than MIT. (This is because MIT, like many schools of technology, needs to work hard to get women into its classes.) On the whole, though, most elite schools are slightly more selective with regard to women than to men, simply because women have come to outnumber men in applicant pools.
Selectivity also varies according to when one applies—that is, whether through a school's early-decision program or during the regular admissions season. At the top schools in particular it is not unusual for early-decision applicants to be accepted at a rate two or three times that for regular-season applicants. At Princeton only 8 percent of the candidates who applied during the regular admissions cycle in 2001 were accepted, but 31 percent of those who applied through the school's early-decision program were accepted—an admission rate closer to that of USC or Boston College than of MIT or Harvard. Schools sometimes claim that their early-decision pools are stronger than their regular-admissions pools, so one should expect some difference in the admission rates. But a recent study of fourteen highly selective schools by researchers at Harvard determined that on average, early-decision candidates had slightly lower SAT scores and class ranks than candidates who applied during the regular admissions season. The same study found that by applying early, students improved their chances of admission to a school by about as much as they would have if they'd scored 100 points higher on the SAT.
The fact is, neither selectivity rankings nor aggregate statistics for any school can tell candidates with any precision how likely they are to be admitted. Owing to many factors other than the ones cited above, including the desire to admit athletes of a certain caliber, to admit the children of alumni, and to have diverse student bodies, the selectivity of the very top schools is highly uneven. That unevenness—the different standards applied to different types of candidates being evaluated by any single school on this list—may well swamp any differences among schools' overall selectivity scores.
If what the rankings conceal is telling, so, too, is how they change over time—or, rather, how they fail to change. (When enduring change does occur, the explanation may be cultural and reflect nothing about a school's inherent quality. America's urban renaissance, for instance, which has made city schools more attractive to students, might lie behind the steady rise of Columbia and Penn and the slow fall-off by Dartmouth and Cornell.) One of the more striking characteristics of the top-fifty list is how chronological it turns out to be. That is, one good predictor of a school's selectivity rank is nothing more complicated than the date of its founding. The average founding years of the top five, ten, twenty-five, fifty, and 100 most selective schools in the nation are 1767, 1785, 1822, 1839, and 1850, respectively. Eight of the first ten schools established in the United States are today among the nation's fifty most selective schools. Of the 128 doctoral universities, liberal-arts colleges, and service academies founded in the whole of the past century, only six can make the same claim. For all the talk of "hot" and "cold" schools, it is difficult to think of another industry in which historical legacy carries as much weight, or brand image appears as immutable. All but nine of the top fifty schools in 1992 are still in the top fifty today; only four schools have broken in or out of the top twenty-five during the same period. By comparison, the top fifty companies in Forbes's 2002 ranking of the nation's largest companies are significantly different from the fifty that topped the 1992 list. (Twelve companies were acquired in the intervening years. And thirteen—more than one third of the remainder—have fallen off the list.) Against the backdrop of an economic culture built around competition and creative destruction, the hierarchy of elite schools appears curiously frozen in time.
And yet, at least in economic terms, a brand-name education appears to mean less than people think. In a 1999 study conducted for the National Bureau of Economic Research, Alan B. Krueger and Stacy Berg Dale showed that the selectivity of the schools students attended—given students of similar background and aptitude—made little difference in terms of their later earnings. Specifically, Krueger and Dale compared people who had attended highly selective schools with people who had gotten into comparable schools but had chosen less selective ones. They found that those in the former group earned no more than those in the latter.
Still, there is something inherently attractive about trying to rate schools based on their selectivity. Such a rating seems to provide clarity. But the clarity is an illusion. There may be good reasons for an individual student to prefer Harvard, the fifth school on this list, to Colgate, the fiftieth. But the fact that Harvard (on average) is harder to get into should not be predominant among them.