More promising are the surveys that ask students and recent graduates to assess their experiences. One of the most prominent and useful is the National Survey of Student Engagement (NSSE, pronounced Nessy), launched in 1999 and currently administered by 573 colleges and universities (see "What Makes a College Good?" by Nicholas Confessore, November 2003 Atlantic). NSSE asks students to rate their educational experience by reporting, for instance, on the quantity and quality of contact with the faculty and on how much homework they receive. Given that previous research shows a strong correlation between such educational "engagement" and learning, NSSE scores may be a better measurement of how well schools teach than many of the statistics that find their way into college rankings. But correlation isn't causation, and surveys like this one offer at best an indirect assessment of educational quality. Their findings rarely touch on either what was learned or, more important, what ought to have been learned.
Finally there is the direct assessment of student learning that takes place constantly on college campuses, usually symbolized by grades and grade point averages. For our purposes these are nearly useless as indicators of overall educational quality—and not only because grade inflation has rendered GPAs so suspect that some corporate recruiters ask interviewees for their SAT scores instead. Grades are a matter of individual judgment, which varies wildly from class to class and school to school; they tend to reduce learning to what can be scored in short-answer form; and an A on a final exam or a term paper tells us nothing about how well a student will retain the knowledge and tools gained in coursework or apply them in novel situations.
Nor can grades capture the cumulative effects of taking dozens of courses over a single four-year stretch. Sometimes the whole of a college education is less than the sum of its parts. Sometimes it's far greater. And in neither case does a student's GPA, whether 2.2 or 4.0, really tell us how much he or she has learned.
Just as challenging as the absence of reliable measures is the resistance to developing them within the academy itself. Even as the initiative for comprehensive educational assessment builds outside the university, many administrators and professors within it continue to resist.
Their reasons are various. What is worth learning cannot be measured, some say, or becomes evident only long after the undergraduate years are over. Others claim that any kind of assessment is a threat to academic freedom and a power grab by administrators and legislators seeking to micro-manage instruction, impose a partisan agenda, or curry favor with voters by claiming to have brought "accountability" to higher education. And the academy has observed with alarm the problems states are having with K-12 assessment.
But perhaps myopia is operating here as well. No one doubts that professors care deeply about whether students learn what is taught in their courses. One suspects, however, that academic turf wars have a lot to do with why cumulative learning is rarely measured. Academics have trouble agreeing with their colleagues in the same field on what students ought to be taught, let alone with colleagues in other disciplines. As a result, to borrow from G. K. Chesterton, measuring cumulative learning hasn't been tried and found wanting; it has been found difficult and left untried.
Or again, the skeptics are right that what passes for assessment, both in higher education and in grades K-12, too often trivializes learning. But that tells us only what is, not what can or ought to be. And it's ironic that academics so disdain the pursuit of data on the subject, given that the academy's culture of evidence is the enviable foundation of the world's greatest research universities.
Finally, faculty members are perfectly correct to point out that a well-conceived assessment program would take considerable time, energy, and money. They are also correct that it would require a difficult rebalancing of research and teaching priorities. But perhaps such a rebalancing, with a renewed focus on undergraduate assessment and an end to the suffocating power of the research ethic, is exactly what universities need.
If assessment is to take hold in the university, however, it's crucial that the impetus for reform come from within. It's a terrible idea to have people outside the academy—whether consultants, politicians, or businessmen—telling professors how, what, and what not to teach.
Nonetheless, there are outside examples worth considering. For instance, a hardheaded assessment ethic makes a big difference in medicine, where survival rates for conditions such as colon cancer and cystic fibrosis can vary dramatically from hospital to hospital. The most successful hospitals are those that measure outcomes and give patients access to the information—which is exactly the model that higher education ought to follow.
A number of promising approaches are already moving academic "doctors" in this direction. At Carleton College, in Northfield, Minnesota, for example, faculty panels assess portfolios of students' writing, drawn from samples across different courses. The portfolios are turned in at the end of every student's sophomore year—an ideal point for remediation. Preliminary reports suggest that the system has helped clarify the school's expectations and standards for faculty and students, and has improved students' writing. This kind of evaluation is of course time-consuming, and Carleton has the advantage of being a small school (1,800 students) with a low student-teacher ratio. Portfolio assessment is not limited to small colleges, however. Washington State University, with 18,700 students on its Pullman campus, has developed a similar system that also incorporates a faculty-graded two-hour writing exam.
Another innovative approach to assessing overall student performance can be found at Alverno College, in Milwaukee, Wisconsin. Alverno's faculty has created an integrated liberal-arts and professional-studies curriculum focused on abilities ranging from analysis and problem solving to effective citizenship and engagement with the arts. Students do not receive grades in the usual sense; instead entering students learn to assess their own course work, and they also receive feedback from faculty members and from assessors in the local business and professional communities. Students keep their assignments and feedback, along with their self-evaluations, in electronic portfolios to track their progress over time, and a faculty council monitors the quality of the assessment across majors.
Also promising is the movement toward "value-added" assessment, which attempts to measure what a particular college or university contributes to its students' knowledge and capabilities during their four or five years.