    Typewriter Man

    In his article "Typewriter Man" (November Atlantic), Ian Frazier professed puzzlement over why early typewriter designers came up with the awkwardly arranged QWERTY keyboard, which persists. The information booklet that comes with the Mavis Beacon typing tutorial software (The Software Toolworks, Inc., 1987 - 1993) offers the following explanation:

    The Type-Writer, invented in 1872 by Christopher Latham Sholes, had a lot of trouble with the keys jamming if they were hit too closely in succession. Unable to redesign the machine to work faster, Sholes in desperation took a step calculated to slow the typist instead. After much experimentation, he produced an arrangement of keys so inconvenient, annoying, and troublesome that it could slow down even the most expert typist. The result was the QWERTY keyboard (after the first six letters of the typewriter's third row) which, with a few minor changes, still confounds typists today.

    Kathleen Thorne

    "Today no one can say for sure why Glidden and Sholes arranged the keys that way," Ian Frazier says of the standard typewriter keyboard, but he might have given the explanation most generally accepted as logical: the aim was to provide the most convenient fingering while separating the hammers so that they could rise and fall without getting in one another's way.

    Note that such common combinations as s-a-w, w-e-r-e, and t-h are conveniently clumped, and i-t and o-u-t are handy. On my Remington Rand, the same model I used in the Second World War, the hammers for w and s adjoin, but the a hammer is third to their left, so that it can drop away from the other two.

    It is often noted that the typist's left hand has more work than the right. This may be related to right-brain control but is more likely related to the right-hand job of returning the carriage and moving the roller without the later convenience of the space lever.

    Leon Lukaszewski

    I've read somewhere that the QWERTY arrangement is quite deliberate. The explanation has such a true capitalistic ring to it that I've always thought it to be accurate. Supposedly an early manufacturer of typewriters (possibly Glidden and Sholes) quickly found his machines in great demand. All resources were devoted to maximum production and none to R&D. The machines were originally produced with a keyboard that lent itself quite naturally to finger dexterity of the human hand. Women using these ancient machines rapidly acquired such proficiency that the machines were not capable of keeping up with the typing speeds. Rather than do a major engineering overhaul of the machine, which was making the manufacturer wealthy, a quick and deliberate fix was instituted. The manufacturer determined which letters were most common in English and ranked them all by frequency of appearance in the language. He then assigned the most frequent letters to the least dexterous fingers; hence typists were (at least temporarily) slowed down by this ergonomically inefficient arrangement. Sales of the machine continued until a better, competing design appeared on the market. The QWERTY keyboard is the odd and lingering result of this capitalistic subterfuge.

    Wilson M. Hancock


    Explanations for the arrangement of the QWERTY keyboard remain at the level of myth and speculation. Michael Adler, a leading historian of the manual typewriter, dismisses as "quite simply nonsense" the theory that the letters were arranged in order of frequency to prevent type-bar clash: "If it had indeed been Sholes' intention to place the most frequently used letters as far apart as possible," Adler has written, "he should have positioned E and T diametrically opposite each other, not virtually adjacent to each other, with E and R as far apart as possible as well." As for the theory that the keyboard was deliberately made inconvenient to slow typists down, Adler points out that slow, two-finger typing was the standard method for at least a decade after the typewriter and its QWERTY keyboard came along. The mind craves explanation, and the notion that the QWERTY keyboard was a clever subterfuge has obvious appeal. But Sholes and Glidden left no evidence of what they had in mind with QWERTY, and we may never know for sure.

    Are Schools Failing?

    Peter Schrag's "The Near-Myth of Our Failing Schools" (October Atlantic) paints too rosy a picture. Like Admiral Hyman Rickover, who is cited in the article, Schrag depends exclusively on standardized tests and other "outsiders'" evidence. Whereas Rickover used such information to get too gloomy about American schools, Schrag's reliance on it blinds him to depressing classroom realities.

    (1) The "whatever" attitude. As in "No, Julius Caesar couldn't have been mad at Shakespeare's portrayal of him; he died more than 1,500 years before Shakespeare wrote the play." "Whatever." The student posture is languorous and aloof, connoting "We both know I have to be here; I'll do what you tell me to, but don't make me think about anything or pretend I care."

    (2) The horror of "acting white." As in "Participating in class or doing homework or studying for tests brands me as a traitor to my peer group and my culture. The system is set up to make white kids look smarter than we are. I'm no sucker; I'm not playing your game."

    (3) The distraction of "theory." Many of the brightest minds coming out of our graduate schools and teachers' colleges have been infected with continental rot: denying truth, denying objective reality, denouncing education as an instrument of domination, and writing what Camille Paglia has well termed "pretentious, labyrinthine junk."

    The test scores tell us something; the experience of facing students and interacting with colleagues tells us something else. Let's put the data together before articulating premature conclusions.

    Jeff Zorn

    Peter Schrag asserts that the well-known decline in SAT scores "occurred chiefly because a larger percentage of lower-ranking students (those from the bottom half of their school classes) began taking the test." His finding is contradicted by a November, 1991, Atlantic article by Daniel Singal, "The Other Crisis in American Education." Singal found a very substantial decline in the number of students earning high SAT scores -- thus the average scores declined not so much because more low scores were included but because there were fewer high scores. Thomas Sowell's Inside American Education (1993) reported the same: "In reality, however, SAT scores declined at the top, not because there were more low scores averaged in. More than 116,000 students scored above 600 on the verbal SAT in 1972 and fewer than 71,000 scored that high ten years later." Singal and Sowell are supported by the 1986 edition of the Center for Education Statistics' The Condition of Education.

    Schrag appears to have drawn most of his information from a report written by staff members at the Sandia National Laboratories in 1991. The report was originally commissioned by Energy Secretary James Watkins but was later "suppressed" by the Department of Energy and the Bush Administration.

    Despite the otherwise sound reputation of the Sandia National Laboratories, subsequent analysis of the Sandia report indicates that it was in error with respect to the SAT decline. A respected scholar -- Professor Lawrence Stedman -- concluded that the decline noted by the Sandia staff resulted not just from the presence of more low scores earned by low-ranked students but from declines in the scores earned by students at all levels. Moreover, Stedman reported in a peer-reviewed educational-research journal that the Sandia report was "seriously flawed by errors in analysis, insufficient evidence, mischaracterizations of the international data, and a failure to consider the evidence that U.S. students are performing at low levels."

    J. E. Stone


    Jeff Zorn's points, while generally correct, could as easily have been made two generations ago, when any student in my eighth-grade class at P.S. 89 who worked too hard or raised his hand too often -- this in an all-white class -- was quickly labeled a "fruit" or an "A.K." (for ass-kisser) by his classmates. Student postures in many schools were languorous and aloof and sometimes contemptuous, though since no statistics are kept on such things, we can't know whether things have gotten worse or better. And although I agree that many teachers (particularly in higher education) have recently been infected by "continental rot," the proud know-nothing ignorance (about math or foreign affairs or history) of many of their predecessors of the golden age -- not to mention the frequent invocation of ethnic stereotypes, if not outright prejudice -- hardly commends them as models.

    As for the SAT, the last time I checked, average scores on the math test were higher than they have been at any time in the past generation; and although the number of scores over 600 on the verbal part was lower in 1982 than it had been a decade earlier -- as was the total number of test takers -- average total (math plus verbal) SAT scores for each fifth of the high school graduating class have crept up steadily since the mid-1970s, which is when such breakdowns were first reported. As to "mischaracterizations" of data, international and otherwise, there are plenty on all sides of this issue, and hardly space in this whole magazine for a full discussion.

    Reading Wars

    Nicholas Lemann's "The Reading Wars" (November Atlantic) assumes that phonics gives us a reliable way to teach reading -- and writing. Unfortunately, it does not. If it did, whole-language would never have gained a foothold.

    If English phonics were as regular and logical as that of other alphabetically written languages, anything as rash as whole-language would have been laughed out of court. Imagine trying to persuade a Spanish, Italian, or German teacher, or a teacher of any other language, to teach reading by means of whole-language!

    English phonics gives us trouble because its forty-four phonemes are spelled in more than 400 different ways. Even though the seventy most common spellings of our forty-four spoken sounds will spell about 85 percent of our words correctly, we are still obliged to remember which spelling of a particular sound goes with which word. Consider the very common ee and ea spellings of the long-e sound. You must remember that "seed" is spelled s-e-e-d, but that "seat" is not spelled s-e-e-t. One whose faulty memory spells "sweet" s-w-e-a-t (like "seat") has written a word that rhymes with "wet."

    English phonics per se -- unaccompanied by exhaustive memory of its irregularities -- is a complete hodgepodge. For the one in five who has difficulty remembering our countless spelling inconsistencies, today's computers can be made to hammer the baffling irregularities into memory.

    Extensive research confirms that immediate correction of an error sends much stronger signals to memory than a delayed correction. Computers can now overcome what has always been the vulnerable part of phonics teaching -- the flaw that opened the door to whole-language.

    Edward Rondthaler

    The Zero Controversy

    The criterion for choosing a numbering scheme for years is convenience, not that "zero is entitled to all the rights and privileges of the other integers" (Dick Teresi's reply, Letters, November Atlantic). For me, the single biggest inconvenience with the current scheme is in converting between a century number and the corresponding years. So as an experiment I have invented the "hectury" (from the Greek "hect-" for 100), which is an interval of 100 years that simplifies the conversion. For example, the nineteenth hectury comprises the years 1900 through 1999. Here is a conversion table:

    As this table shows, fixing one problem causes others. Now there is a year zero. Just as bad, there is a zeroth hectury. Worse yet, the simplified conversion rule for hecturies doesn't work for years B.C.
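    The A.D. half of the conversion rule is simple enough to sketch in C (the function name is my own, not the letter writer's). Note that the rule has no sensible answer for years B.C., so the sketch returns an error value for negative years:

    ```c
    #include <stdio.h>

    /* A minimal sketch (function name is mine) of the hectury rule:
       for A.D. years the hectury number is simply the year divided
       by 100, so 1914 falls in the 19th hectury and 42 in the
       zeroth.  The rule breaks down for years B.C., so negative
       years return -1 as an error value. */
    int hectury(int year) {
        if (year < 0)
            return -1;        /* no hectury defined for years B.C. */
        return year / 100;    /* 1999 -> 19, 2000 -> 20, 42 -> 0 */
    }

    int main(void) {
        printf("1914 is in hectury %d\n", hectury(1914));
        return 0;
    }
    ```

    The zeroth hectury falls out of the arithmetic automatically: any year from 0 through 99 divides to zero.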

    Computer programmers are among those who count from zero. In the C programming language, arrays are numbered from zero. Thus the elements of an array having three elements are numbered 0, 1, 2. Forgetting this convention is a frequent cause of errors known as "off-by-one errors." These errors are especially common among programmers who switch from using the Pascal programming language to using C, because Pascal array elements are usually numbered beginning with one instead of with zero.
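    The zero-based convention described above takes only a few lines of C to demonstrate (the array and function here are my own illustration):

    ```c
    #include <stdio.h>

    /* A three-element C array's elements are numbered 0, 1, 2.
       The loop bound "i < 3" (not "i <= 3") is what avoids the
       classic off-by-one error of reading past the end. */
    int sum3(const int a[3]) {
        int total = 0;
        for (int i = 0; i < 3; i++)   /* indices 0, 1, 2 */
            total += a[i];
        return total;
    }

    int main(void) {
        int a[3] = {10, 20, 30};
        printf("%d\n", sum3(a));      /* prints 60 */
        return 0;
    }
    ```

    A Pascal-trained programmer who writes the loop as `for (int i = 1; i <= 3; i++)` skips element 0 and reads one element past the end -- exactly the off-by-one error the letter describes.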

    The number lines accompanying Teresi's reply would be even more useful if they showed the negative numbers to the left of zero. They would then show that the Bede-Dionysus numbering scheme is less consistent in assigning a number to a one-year interval than the Cassini scheme. The Cassini scheme always assigns the integer that is at the right endpoint of a one-year interval, whereas the Bede-Dionysus scheme does so only for intervals to the right of zero. For intervals to the left of zero the Bede-Dionysus scheme switches to using the left endpoint of each interval. By the way, mathematicians commonly use the word "interval," whereas Teresi uses "increment," for a set of points between a pair of endpoints.
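    The two labeling schemes can be sketched in C (the helper function is my own illustration): given the Cassini label of a one-year interval -- the integer at its right endpoint, with zero and negatives allowed -- the Bede-Dionysus label keeps that number as A.D. for intervals to the right of zero, and for the rest switches to the magnitude of the left endpoint, counted B.C.:

    ```c
    #include <stdio.h>

    /* Sketch (my own helper): convert a Cassini interval label to
       its Bede-Dionysus equivalent, which has no year zero.
       Cassini 1 -> "A.D. 1"; Cassini 0 (the interval from -1 to 0)
       -> "1 B.C.", its left endpoint's magnitude; Cassini -1 ->
       "2 B.C.", and so on. */
    void bede_label(int cassini, char *buf, size_t bufsize) {
        if (cassini >= 1)
            snprintf(buf, bufsize, "A.D. %d", cassini);
        else
            snprintf(buf, bufsize, "%d B.C.", 1 - cassini);
    }

    int main(void) {
        char buf[32];
        bede_label(0, buf, sizeof buf);
        printf("%s\n", buf);          /* prints "1 B.C." */
        return 0;
    }
    ```

    The switch from right endpoint to left endpoint shows up in the code as the change of formula at zero -- the very inconsistency the number lines would make visible.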

    Steve Tyler

    Illegal Whaling

    Mark Derr's article on whaling contains the statement that "abundant evidence exists that illegal whaling is a major problem" ("To Whale or Not to Whale," October Atlantic). This raises some questions: What are Derr's criteria for illegality, to whom are they applied, and what is the evidence?

    I am unaware of any current evidence of "illegal whaling," although of course there has been much of it in the past, including the recently admitted-to illegal catches by the former Soviet Union. Only thirty-nine states are members of the IWC, and so far as I'm aware, none are violating the ICRW by catching whales contrary to their obligations under that treaty. The current catches by Norway and Japan are allowed under different provisions of the treaty. Other catches under the convention are by aboriginal whalers and also specifically allowed under the schedule. Any nonmember state can take whales lawfully unless it has violated restrictions under another agreement. Most of these nonmembers accept the 1982 Law of the Sea treaty under which taking whales is perfectly lawful. Very few nonmembers of the IWC target and take whales.

    William T. Burke


    William T. Burke, an expert on the law of the sea, has argued that the Southern Ocean Sanctuary, established by the International Whaling Commission to protect whales from any hunting in that region, is illegal. Japan is the only IWC member to have voted against the establishment of the sanctuary and to subscribe to Mr. Burke's opinion. Monitoring of whaling activities is weak to nonexistent, but what evidence does exist raises alarm in many quarters. For example, in recent years Japanese customs inspectors have seized whale meat in excess of 700 tons (representing forty to eighty-five whales) from Russia, South Korea, and Taiwan. Norwegian customs inspectors have also seized whale meat, mislabeled for illegal export. Whether any of this meat was taken "legally" is doubtful, but in any event the trade violates the Convention on International Trade in Endangered Species (CITES), which restricts trade in all large whale species. Although the "current catches" of Norway and Japan may be "allowed," export of that meat is not, raising ethical and legal questions about the purpose of the hunts. Without enough international inspectors and safeguards, judging the precise extent of the problem remains difficult, but whether the whaling is sanctioned by governments or conducted by individuals, it appears significant to many observers.

    The Atlantic Monthly; February 1998; Letters; Volume 281, No. 2; pages 8-12.

    Hectury        Years

    20             2000 - 2099
    19             1900 - 1999
    18             1800 - 1899
    10             1000 - 1099
    9              900 - 999
    1              100 - 199
    0              0 - 99
                   100 - 1 B.C.
                   200 - 101 B.C.