November 2005

What Does College Teach?

It's time to put an end to "faith-based" acceptance of higher education's quality

"What makes your college worth $35,000 a year?" It's a hard question for a college president to answer, especially because it's usually raised at gatherings for prospective students and their anxious, checkbook-conscious parents. But it also provides an opportunity to cast one's school in a favorable light—to wax eloquent about admissions selectivity, high graduation rates, small classes, and alumni satisfaction.

The harder question, though, comes when someone interrupts this smooth litany: "But what evidence is there that kids learn more at your school?" And as I fumble for a response, the parent presses on: "Are you saying that quality is really mostly a matter of faith?"

The only answer is a regretful yes. Estimates of college quality are essentially "faith-based," insofar as we have little direct evidence of how any given school contributes to students' learning.

This flies in the face of what most people believe about college, and understandably so. After all, if we don't know what makes a school good or bad, then the anxiety-driven college-application process is a terrible waste, the U.S. News & World Report rankings are a sham, and all the money lavished on vast library holdings, expensive computer labs, wireless classrooms, and famous faculty members is going for naught. And what about SAT scores, graduation rates, class sizes, faculty salaries, and alumni giving? Surely, a college-obsessed parent might object, such variables make some difference.

Perhaps they do—but if so, we haven't found a way to measure it. In How College Affects Students, a landmark review of thirty years of research on college learning, Ernest Pascarella and Patrick Terenzini found that simply going to college, any college, makes a major difference in a young person's psychological development: students come away with improved cognitive skills, greater verbal and quantitative competence, and different political, social, and religious attitudes and values. But although the researchers found wide variations in learning within each college or university, they were unable to uncover significant differences between colleges once the quality of the entering students was taken into account.

So it's not just a perverse status-consciousness that makes higher education the only industry in which competitors are rated on the caliber of their customers rather than on their product—or that drives U.S. News & World Report to rank colleges on how well they recruit and graduate already successful high schoolers. It's that we have no other discriminating way to measure collegiate quality.

It's possible that this situation reflects a real absence of variation—that there really isn't much difference between, say, an Ivy League education and four years at a middling private or state school. According to this explanation, faculty members across the country tend to graduate from a relatively small number of doctoral programs, use comparable textbooks, construct similar curricula, hold fairly low expectations for student achievement (particularly in an age of grade inflation), and labor under a system that rewards research over teaching. In this homogenized landscape the quality of entering students is the only thing that matters: "Diamonds in, diamonds out; garbage in, garbage out."

A second, more persuasive explanation, however, holds that current assessment measures simply can't pick up the differences in learning from one campus to another. And robust measurements don't exist in part because colleges don't want them—because developing and testing them would be expensive; because faculty members would disagree on what to measure; and because institutions are wary of anything that calls into question the long-running perception of American higher education as "world class."

But in an era when the importance of a college diploma is increasing while public support for universities is diminishing, such assessment is desperately needed. The real question is who will control it. Legislators are prepared to force the issue: Congress raised the question of quality during its recent hearings on the reauthorization of the Higher Education Act; all regional accrediting agencies and more than forty states now require evidence of student learning from their colleges and universities; and pressure is rising to extend a No Child Left Behind-style testing regime to higher education.

To date academe has offered little in response, apart from resistance in the name of intellectual freedom and faculty autonomy. These are legitimate professional prerogatives; but unless the academy is willing to assess learning in more rigorous ways, the cry for enforced accountability will become louder, and government intervention will become more likely.

Current measures of college quality fall into four major categories, outlined a few years ago by my colleague Marc Chun: actuarial data, expert ratings, student/alumni surveys, and the direct assessment of student performance. While each of these has its uses, none is anywhere close to being a legitimate measure of how much students learn over their college careers. Like the drunk who looks for his keys under the streetlamp because the light is better there, Chun has argued, colleges rely on these measures because they are inexpensive and readily available, not because they actually tell us much.

Actuarial data and expert ratings are familiar to anyone who has spent an afternoon leafing through the U.S. News rankings. The former consist of quantifiable information such as graduation rates; data on racial diversity, admissions selectivity, and research funding; student-teacher ratios; and SAT and ACT scores. These statistics are easy to gather, and have long been assumed to reflect institutional quality. But there is little evidence that the attributes they measure have a decisive impact on student learning.

Equally easy to compile are surveys of institutional quality, in which faculty members and administrators across the country are asked to rate their competitors, typically on a five-point scale. These surveys are interesting if not taken too seriously, but the participants may not know enough about other institutions to make such judgments, and the variables they find most noteworthy may not be the ones that are actually important.
