The Atlantic Monthly | October 2002

The Utmost Measures
A word in behalf of subjectivity
by Cullen Murphy

About 75 million Americans have taken the SAT for college admission since the Scholastic Aptitude Test (as it was then known) was first administered, in 1926, to 8,040 high school seniors. Last summer the College Board, which oversees the test, announced plans for significant revisions, scheduled to go into effect in the spring of 2005. The biggest change is the addition of an extemporaneous essay. Together with some grammar questions, the essay will constitute a third part of the test, joining the revised math and verbal sections. Students will be given an essay topic; some possibilities mentioned by the College Board include "Nothing requires more discipline than freedom," "The greatest griefs are those we cause ourselves," and "Novelty is too often mistaken for progress."
Educators will argue for years over the wisdom of the specific changes, and over what sorts of students stand to benefit and what sorts stand to lose. But two aspects of the new SAT are welcome developments on their face.
The first is that all those who have ever bragged about their SAT scores will suddenly see their claims undermined. Because there will soon be three sections, worth 800 points apiece, the best possible scores will total 2400; the former gold standard of 1600 appears deeply mediocre by comparison. Supplying an explanatory footnote—as history books do, say, to translate the value of ancient drachmas into modern dollars—will be awkward at best. The entire Baby Boom generation will be in the position of having to make the case to its grandchildren that back in the sixties or seventies an SAT score of 1490 or 1540 was actually not bad—like grandparents today who must patiently explain, to vaguely disbelieving young people, that "five thousand dollars was quite a lot of money in those days."
The second welcome development is the element of subjective judgment, which both the writing and the grading of the essays will require. Subjective judgment has been maligned for decades, on grounds of capriciousness and unfairness, and because it is "unscientific." Standardized testing was, of course, an attempt to get away from subjectivity. But every advance in testing, it seems, elicits the discovery of further flaws. This is the paradox of measurement: the more objective and precise we get, the more nimbly truth manages to keep a certain distance.
Janet L. Norwood, who served as the nation's Commissioner of Labor Statistics for nearly thirteen years, once complained, "The real problem is that people often want a number to tell them everything." She was referring to disagreements over the validity of seemingly objective economic measures—the unemployment rate and the poverty rate, the Consumer Price Index and the gross domestic product—but she might as well have been referring to an immutable trait of human character.
By chapter six in the Book of Genesis, three chapters after the expulsion from Eden, human beings must confront the importance of being able to measure things. The Lord commanded Noah to build an ark: "This is how you are to make it: the length of the ark three hundred cubits, its breadth fifty cubits, and its height thirty cubits." To which Noah replied, as recorded on Bill Cosby's first album, "Right. What's a cubit?" (It is the length from the crook of the elbow to the furthest fingertip.) Ever since, measurement has been extended to more and more phenomena—the mass of an electron, the size of the universe—and has become more and more refined.
The number of standardized ways in which a typical person is measured, beginning now with prenatal screening and continuing up through such things as "emotional intelligence" assays and "360-degree" performance reviews, must run into the dozens. Measurements of electrical patterns in the amygdala, a region of the forebrain, may soon reveal our innermost emotions. An automobile designed by Toyota in cooperation with Sony, called the Pod and intended to help control road rage, contains sensors to measure pulse rate and level of perspiration; at the first sign of trouble it begins to play soothing music and warns drivers to calm down. (Memo to Toyota: this will only make them madder.) Measurements are palpated for the subtlest insights. A recent study conducted by clinicians in Toronto explored the relationship between high status and good health by comparing the longevity of Oscar winners with that of other actors and actresses. (Oscar winners live, on average, 3.9 years longer.) In another recent study researchers at the University of California at Berkeley randomly selected 720 people at street festivals in San Francisco, asked them about their sexual orientation, and measured their fingers. Lesbians, the researchers concluded, are more likely to have index fingers that are unusually short relative to their ring fingers.
Everyone knows that many types of measurement are at best crude constructs, especially when it comes to human psychology and well-being. But even the brute physical world remains mysteriously elusive. Fingerprints have been a bedrock of forensic evidence for decades—but the reliability of fingerprint analysis in some circumstances has lately been called into question. If anything ought to stay still long enough to be precisely measured, it is nature's physical "constants." But it turns out that our values for such things as the gravitational constant, the fine-structure constant, and even the speed of light may not be as solid as one would wish. "The constants of nature could be lawless variables," a physicist from London's Imperial College told a conference earlier this year.
In the everyday world, too, standard methods of measurement have been found to fall short. Last year the National Weather Service announced that the formula for the wind-chill factor had been somewhat inaccurate ever since it was adopted, in 1973, and that a new formula would be used in the future. Many meteorologists agree that the heat index, the wind-chill factor's warm-weather counterpart, could also benefit from remedial attention.
No one would argue for scrapping society's vast accumulated infrastructure of measurement. Indeed, there may be some new statistical indices we'd all be grateful to have. For instance, it would be easy enough to devise an accuracy-of-prognostication index for newspaper columnists, perhaps a little number that would appear right after the byline: "by Robert Novak (1.7)"; "by David S. Broder (7.6)." But at the same time, it might be worth giving subjective judgment more weight. Subjective judgment, after all, is what gives us epigrams. It is the methodology that informs such phrases as "gut reaction" and "cut of his jib." It is why NASA still uses noses rather than machines to decide which smells will prove intolerable in space. It often captures truth more fully than any measurement can.
As the College Board has shown, objective measures can easily be supplemented with something more individual and illuminating—namely, a short essay.
Imagine, say, that Albert Camus were a television meteorologist. After reporting the heat index, and maybe comically mopping his brow, he would tell us what being outside in such heat actually felt like:
I was surprised at how fast the sun was climbing in the sky. I noticed that for quite some time the countryside had been buzzing with the sound of insects and the crackling of grass. The sweat was pouring down my face ... The glare from the sky was unbearable. At one point, we went over a section of the road that had just been repaved. The tar had burst open in the sun. Our feet sank into it, leaving its shiny pulp exposed. [The Stranger]
Or the meteorologist could be Jack London. After he finished telling us the wind-chill factor, with an affable on-camera shudder and a sideways grin at the news anchor, he might also explain what that terrible degree of cold really did to a person:
It was surprising, the rapidity with which his cheeks and nose were freezing. And he had not thought his fingers could go lifeless in so short a time. Lifeless they were, for he could scarcely make them move together to grip a twig, and they seemed remote from his body and from him. When he touched a twig, he had to look and see whether or not he had hold of it. ["To Build a Fire"]
This approach could usefully augment many types of measured assessment—the latest unemployment figures, an electrocardiogram, a new estimate of the age of the universe. I put it forward with a certain hesitation, aware that novelty is too often mistaken for progress.
Cullen Murphy is The Atlantic's managing editor.
Copyright © 2002 by The Atlantic Monthly Group. All rights reserved.
The Atlantic Monthly; October 2002; Innocent Bystander; Volume 290, No. 3; 18-20.