When Reed's former president Steven Koblik decided to stop submitting data to U.S. News, he asked the magazine simply to omit Reed from its listings. Instead the editors arbitrarily assigned the lowest possible value to each of Reed's missing variables, with the result that Reed dropped in one year from the second quartile to the bottom quartile. After the predictable outcry, U.S. News purportedly began to rank Reed based on information available from other sources. In subsequent years that procedure usually placed the college somewhere in the middle of the second quartile, with a footnote stating that we "refused to fill out the U.S. News statistical survey" and claiming to base the ranking on data from published sources. But since much of the information needed to complete the magazine's ranking algorithm is unpublished, one can only guess how the editors arrive at the values they use.
Reed's experience has not gone unnoticed. In a recent conversation with me the president of a leading liberal arts college lamented the distortions and deceptions that the ranking process engenders. When I suggested that he follow our example, he replied, "We can't. They will just plug in their own data, and we'll drop ten places in the rankings!" Criticism of the rankings is nearly unanimous, but so is compliance with them. According to the latest statistics supplied by U.S. News, only five percent of surveyed colleges and universities fail to submit the statistical questionnaire. In the words of another of my fellow presidents, "The rankings are merely intolerable; unilateral disarmament is suicide."
Far from committing suicide, Reed College has survived. Indeed, it has thrived. Over the past ten years the number of applicants has increased by 27 percent, and the quality of entering students, as indicated both by conventional SAT and GPA measures and by Reed's internal "reader rating" system, has steadily risen; it is far higher than our nominal place in the U.S. News pecking order would suggest. More important, Reed continues to offer an academic program widely recognized for its uncommon rigor, intellectual structure, and theoretical depth. Its students continue to participate in faculty research and to earn competitive prizes and fellowships at unusually high rates. The college continues to set the pace in the percentage of its graduates who go on to earn a Ph.D.
At professional meetings my colleagues often ask, "What is life like outside the rankings rat race?" and "How has Reed survived?"
Not cooperating with the rankings affects my life and the life of the college in several ways. Some are relatively trivial; for instance, we are saved the trouble of filling out U.S. News's forms, which include a statistical survey that has gradually grown to 656 questions and a peer evaluation for which I'm asked to rank some 220 liberal arts schools nationwide into five tiers of quality. Contemplating the latter, I wonder how any human being could possess, in the words of the cover letter, "the broad experience and expertise needed to assess the academic quality" of more than a tiny handful of these institutions. Of course, I could check off "don't know" next to any institution, but if I did so honestly, I would end up ranking only the few schools with which Reed directly competes or about which I happen to know from personal experience. Most of what I may think I know about the others is based on badly outdated information, fragmentary impressions, or the relative place of a school in the rankings-validated and rankings-influenced pecking order.
A somewhat more important consequence of Reed's rebellious stance is the freedom from any temptation to game the ratings formula (or, assuming that we would resist that temptation, from the nagging suspicion that we were competing in a rigged contest). Since the mid-1990s numerous stories in the popular press have documented how various schools distort their standard operating procedures, creatively interpret survey instructions, or boldly misreport information in order to raise their rankings. Such practices have included failing to report low SAT scores from foreign students, "legacies," recruited athletes, or members of other "special admission" categories; exaggerating per capita instructional expenditures by misclassifying expenses for athletics, faculty research, and auxiliary enterprises; artificially driving up the number of applicants by counting the first step of a "two-part" application process as a completed application; and inflating the yield rate by rejecting or wait-listing the highest achievers in the applicant pool (who are the least likely to enroll if admitted). Rumors of these practices and many others like them were rampant in education circles in the early years of formulaic ranking.

I was struck, however, in reading a recent New York Times article, by how the art of gaming has evolved in my former world of legal education, where ranking pressure is particularly intense. The Times reported that some law schools inflate their graduate-employment rates by hiring unemployed graduates for "short-term legal research positions." Some law schools have found that they can raise their "student selectivity" (based in part on the LSAT scores and GPAs of entering students) by admitting fewer full-time first-year students and more part-time and transfer students (two categories for which data do not have to be reported). At least one creative law school reportedly inflated its "expenditures per student" by using an imputed "fair market value," rather than the rate it actually paid, to calculate the cost of computerized research services provided by LexisNexis and Westlaw. The "fair market value" (what a law firm would have paid) differed from what the law school actually paid (at the providers' educational rate) by a factor of eighty!