Letters to the Editor

Iraq and the Insurgency

The main premise of James Fallows’s article “Why Iraq Has No Army” (December Atlantic) is that “America’s hopes today for an orderly exit from Iraq depend completely on the emergence of a viable Iraqi security force.” Fallows proceeds to present a tightly argued case for why this force has not come into being and details the policy changes that would be required to achieve such a force.

The premise itself is never defended even though a compelling counter-premise has been put forth by highly knowledgeable individuals and appears to be built into the new Iraqi constitution, which Fallows does not discuss at all. The counter-premise is that security and the avoidance of civil war have a much better chance of being achieved through the continued development of largely autonomous regions with virtually independent militias. This alternative was discussed very cogently in an article by Ambassador Peter Galbraith in The Washington Post on November 7.

In actuality Fallows’s article indirectly gives weight to the counter-premise of regional militias because he shows clearly the overwhelming risks, costs, and time required “to bring an Iraqi army to maturity.”

Robert W. Raynsford, Ph.D.
Deputy Assistant Secretary of the Army (Retired)
Washington, D.C.

James Fallows writes, “In Japan, Germany, and South Korea … none had an insurgency aimed at Americans.” This may be true for Japan and Germany (although the German case is somewhat ambiguous), but post–World War II South Korea most definitely had “an insurgency aimed at Americans.” Partly but not entirely backed by North Korea after 1946, the anti-American insurgency began in the fall of 1945 and continued until the North Korean invasion of June 1950. Assassinations, ambushes, and armed clashes cost the lives of thousands of Americans and their South Korean supporters, and countervailing attacks by the emerging Syngman Rhee government could be extremely brutal. The worst violence occurred in April of 1948, when some 10,000 suspected Communist insurgents were killed on Cheju Island, one-third of the island’s entire population.

In many ways the U.S. occupation of South Korea from 1945 to 1948—poorly planned, incompetently executed, underequipped, and lacking long-term resolve—was much more like the current Iraq imbroglio than the occupations of Germany and Japan.

Charles K. Armstrong
Director, Center for Korean Research
Columbia University
New York, N.Y.

I picked up the December issue at the airport because the article “Why Iraq Has No Army” by James Fallows caught my eye. Before I got to it, however, I read the Wall Street Journal editorial that mentioned the article and asserted that Mr. Fallows had not only never visited Iraq but had never interviewed anybody in either the U.S. or the Iraqi governments prior to writing it. After reading that, I decided not to invest the time in reading the article. If I want unknowledgeable anti-American propaganda, I can watch the network news.

Bob Bronson
Naperville, Ill.

James Fallows replies:

Regarding Robert Raynsford’s discussion of regional autonomy: Whether Iraq should emerge as three functionally independent regions, or even as three separate countries, was one of many topics I deliberately omitted from this article, so as to concentrate on America’s announced goal of creating an Iraqi security force representing all regions and all religious and ethnic groups. Personally, I view de facto regional independence as nearly inevitable, with both good and bad consequences. But the explicit policy of the United States is to foster the growth of a government that claims the loyalty of all Iraqis, and a military loyal to that government. That ambition is what I examined in my article.

Charles Armstrong is right to point out that the situation in Korea after World War II was very different from those in Germany and Japan. Germany and Japan mounted no serious resistance to their American occupiers, principally because they were utterly defeated societies after many years of all-out war. Koreans suffered violent conflict for eight more years—from September of 1945, when the Soviet Union and the United States divided the Korean Peninsula into spheres of influence north and south of the 38th parallel, through the end of the Korean War, in the summer of 1953. During the first few years of that time, as Armstrong writes, American forces were engaged in what could be called either civil war or counterinsurgent warfare, born mainly of their effort to prop up the Syngman Rhee regime in the South. The reason I mentioned Germany, Japan, and South Korea as a group is that in all of them the United States has maintained a large military presence for many decades, which it would not have done if its troops had been the objects of continuing attacks.

About The Wall Street Journal’s editorial page: the allegation that Bob Bronson mentions was and is preposterous. The Journal’s editorial cited an unnamed source in the Multinational Security Training Command in Iraq—the organization responsible for training Iraqi troops—and claimed that I “didn’t even contact them while reporting the article or at anytime during at least the past nine months.” That is flatly untrue. I interviewed many members of that organization (among other civilian and military officials I spoke with), including its then commander, Lieutenant General Dave Petraeus, and his deputy. The Journal’s editorial writers would have known this claim was false had they checked with me or this magazine before publishing, which they did not do. Indeed, they would have known this if they had even looked at my article before criticizing it, since it contained lengthy quotes from Petraeus and an explanation of which interview requests the Pentagon press office had approved and denied. Mr. Bronson would have known all this if he had seen the next day’s issue of The Journal, which published a retraction of the false claim.

Is God an Accident?

It is gratifying to have as esteemed a scholar as Paul Bloom approvingly refer to my research, as he did in “Is God an Accident?” (December Atlantic). But the provocative title and a number of remarks in the essay might lead readers to believe that scientific accounts of religious beliefs, such as the sort he and I both embrace, undermine the truth of religious belief. I would hate for readers to misunderstand the relationship between the science of belief and belief itself.

To use science to attack religion in this way is misguided and ultimately undermines our confidence in science even more than our confidence in religion. If religious belief is only a byproduct of our naturally selected minds, having produced no direct fitness benefits in our evolutionary past, so too are a host of scientific beliefs, including the belief in natural selection itself. This observation leads to an uncomfortable problem for the anti-theist. If our brains (and the thoughts they generate) have arisen only because of their ability to produce survival-related behaviors and not Truth, how can we trust them to tell us the truth about such matters as, say, natural selection? The anti-theist must construct an argument to justify trusting his or her own mind, which could be in the midst of producing “accidental” thoughts and beliefs while constructing the argument! Such an argument, too, must consider the huge psychological literature detailing how human minds systematically get things wrong—from visual perception to higher-order reasoning—apparently to assist in our survival.

Even embracing an evolutionary account of religion, the theist may skate through this epistemological train wreck by insisting that a deity has orchestrated evolution to produce minds that can be trusted to produce true beliefs (at least under certain conditions). Perhaps the deity fine-tuned the nature of the universe from its origin so that our minds—capable of truly knowing the deity—would be inevitable. Or perhaps the deity directed just the right “random” mutations that natural selection then chose, which eventually produced our minds so that they could know Truth.

The point is that the theist may choose to believe in a deity and in evolutionary or cognitive scientific accounts of religion without a conflict. The anti-theist’s determination to undercut religious belief via evolution may force abandonment of science itself. If, as Bloom suggests, religion and science will always clash, the blame lies not with the theist but with the anti-theist.

Justin L. Barrett
Institute for Cognition and Culture
Queen’s University
Belfast, Northern Ireland

Paul Bloom draws upon a voluminous body of research, much of it his own work on the development of understanding of self, to construct a wholly plausible argument that a dualist approach to understanding the world is built into humans as a genetic—or more probably epigenetic—adaptation that has served human societies well, and that this dualism has specific consequences for widespread belief in God.

There are several subtleties in the argument that deserve closer scrutiny. Bloom teeters on the edge of the common trap of discussing science and religion as though they are monolithic. As his tour around various belief systems shows, what we term “religion” encompasses a breathtakingly wide spectrum of beliefs.

Similarly, cosmologists, synthetic chemists, and social psychologists approach their crafts in incredibly diverse ways. While there are certainly hot spots in the relationship between science and religion, exemplified by the clash between evolutionary biology and a particular fundamentalist interpretation of Christianity, the vast majority of what is addressed by each camp has no bearing on the other. In the end the epigenetic-adaptation argument advanced by Bloom is no more compelling than the existence of compound eyes or flagellar motion in bacteria advanced by the intelligent-design community as evidence for the existence of God. Perhaps it is time to calm the shrill voices on each side and recognize that the existence problem is, within the context of what we understand science to be capable of, formally undecidable, and therefore is properly within the realm of faith alone.

Paul W. Bohn
Centennial Professor of the Chemical Sciences
University of Illinois
Urbana, Ill.

As an unbeliever resigned to the necessity of religion, I appreciate Paul Bloom’s article proposing a source for that mysterious necessity. However, some comments may be in order.

That we have evolved to be creationists may be saying too much. The rural Thais with whom I spend much of my time are not much concerned with the origins of life, and certainly have no commitment to any given explanation or story. Rather, they are concerned with the multitude of spirits—of the dead, of trees and streams, etc.—that must be clothed, fed, placated, exorcised. Similarly, both urban and rural Thais are not much concerned with the meaning of things. Often there is no articulate meaning, beyond “It’s pretty.” When pressed, Thais make something up, resulting in conflicting explanations for the same thing. This would seem to call into question the inborn tendency Bloom posits to impute purpose where there is none. Still, as Bloom predicts, there seems to be a tendency to impute a designer to the overall scheme of things.

Finally, Bloom need not have worried that Buddhism constitutes a counter-example to his thesis. “While it may be true,” he writes, “that ‘theologically correct’ Buddhism” rejects belief in body-soul duality and supernatural beings, “actual Buddhists believe in such things.” Indeed they do, but “theologically correct” Buddhism also believes in life after death, ghosts, deities, heavens, and hells. These beliefs are central components in the doctrine of every traditional variety of Buddhism. The rational Buddhism of which so much has been made by Western humanists is a creature of their highly selective readings of Buddhist scripture.

Stephen Evans
Mahachulalongkorn Buddhist University
Bangkok, Thailand

A trend is apparently emerging in modern thought, which we might call “accidentalism”: everything important is an accident. The accidental human, treading his accidental Earth, stabilized by an accidental moon, whirls through the accidental universe as he worships his accidental god in the language of his accidental mind. It is at least a consistent paradigm. But it is not an explanation, and it is hardly more appealing than any other non-explanation (divine agency, to pick one) we might put forward. And if it is not exactly intellectual bankruptcy, it approaches intellectual penny-wisdom and pound-foolishness.

Nelson Hoffman
Los Alamos, N.M.

Paul Bloom replies:

Much of my argument that belief in God is an evolutionary accident is based on Justin Barrett’s important research, and I’m glad he took the time to express his more general views on religion and science. He notes that the human capacity for science is also likely to be an evolutionary accident, and concludes from this that we have no reason to trust our scientific beliefs. For Barrett, the only way we can ever be sure that anything is true is to trust that God wanted us to have true beliefs and has orchestrated our evolution with this goal in mind.

But there is a better reason to take science seriously. Yes, our intuitions and hypotheses are imperfect and unreliable, but the beauty of science is that these ideas are tested against reality, through observation, prediction, and experiment. The reason to be confident that the Earth revolves around the sun, for instance, does not come from evolution or theology. It comes from the discoveries made by astronomers.

I do agree with Barrett that scientific accounts of the origins of religious beliefs do not necessarily undermine the truth of such beliefs, but I’m less sanguine about the relationship between science and religion more generally. The problem is that religions consistently make claims—about the age of the Earth, the nature of mental illness, the origins of species, the nature of consciousness, and so on—that turn out to be wrong. This clash is not inevitable; one might choose to hold supernatural beliefs that can never be proven wrong, as Barrett does with regard to evolution, or one can restrict religion to statements about value and give up on statements about reality, as Stephen Jay Gould proposed. But neither approach corresponds to religion as it is practiced and understood by most of the human race.

Stephen Evans provides an interesting example of people who do not believe in an afterlife or in a specific account of divine creation. As I discussed in my article, such people clearly exist, but they are the exception: supernatural belief is the universal default. This claim is nicely supported by Evans’s point that Buddhists hold precisely the same sorts of dualist views as Christians.

Finally, both Nelson Hoffman and Paul Bohn have interesting things to say, but I am mystified by their pessimism. Regarding Hoffman’s concerns, no scientist thinks that everything is an accident. In particular, many aspects of the human brain display complex, adaptive, and non-accidental design, and so are plausibly viewed as biological adaptations. Bohn starts by describing my account as “wholly plausible” but then ends up putting it in the same category as intelligent-design arguments for the existence of God, an analogy that probably isn’t meant to be flattering. In fact, my proposal might well be wrong, but it is a psychological theory of the usual sort, based on the same experimental methods that have been used to study other aspects of mental life. The question of why people have supernatural beliefs is just another scientific problem—though, admittedly, an unusually interesting one.

College Admissions

As a firsthand observer (our only child started college this fall), I view the college admissions process differently from Ross Douthat (“Does Meritocracy Work?,” November Atlantic). From his perspective, my daughter and her classmates represent a naked display of power and privilege: graduates of an expensive prep school, many of them are attending the best colleges in the country and paying full freight. But a closer look at the situation reveals an admissions success story.

Overwhelmingly, the parents of my daughter’s classmates were raised in modest circumstances and then hoovered up by the great meritocracy machine. They attended good colleges, worked hard, stayed married, bugged their kids to study more and party less, and saved enough money to pay exorbitant tuition bills.

Should their children be swept aside in favor of less-talented kids just because their parents played by the rules and climbed the ladder of success? I don’t think so. In the coming decades America will need the very best leaders to overcome the daunting problems that it faces, and they should be recruited from every income class, even the higher ones.

Peter K. Clark
Lafayette, Calif.

Ross Douthat replies:

Peter Clark’s concerns are understandable, but no one is advocating that smart, hardworking students from upper-income families be “swept aside” in order to broaden poor students’ access to higher education. It’s true that any kind of class-based affirmative action would necessarily make some well-off students marginally less likely to be admitted to their first- or second-choice school. But there’s very little evidence that a student is seriously disadvantaged later in life by attending Skidmore rather than Stanford, or Colgate rather than Cornell. And American higher education contains so many good schools that, as James Fallows and V. V. Ganeshananthan have pointed out in The Atlantic, nearly every applicant can find “a school that fits his or her skills, needs, and interests.” In addition, many excellent schools are actively competing for high-scoring students from high-income families, showering them with merit aid in the hopes of enticing them to attend. Given these advantages—as well as the preferences that schools already offer legacies, athletes, and minorities—it seems only fair to provide a few more advantages to the disadvantaged as well.

In “The Best Class Money Can Buy” (November Atlantic), Matthew Quirk labels enrollment managers extortionists—concerned only with squeezing more money out of students to maximize revenue and rankings. Any attempts at objectivity pale in comparison to his biased metaphors.

Quirk’s article is, however, a fairly accurate portrayal of enrollment management—ten years ago. I was dean of enrollment at Johns Hopkins when it found itself on the front page of The Wall Street Journal for even considering denying competitive aid to students who would enroll anyway, and I have seen firsthand the changes in enrollment management over the past decade. In the early 1990s, as my colleagues and I experimented with new enrollment-management techniques as they applied to financial aid, it became increasingly evident that unless we used these tools in a way that was consistent with institutional vision, our work would not survive in the long term. Most of us—not some, as Quirk would have us believe—have adjusted accordingly.

Since 1999, our enrollment-management approach at Dickinson College—clearly aligned with the college’s mission—has done just that, and it is working. We have tripled our enrollment of first-year students from underrepresented groups, halved non-need-based aid, and increased average SAT scores by 100 points. It would be hard to argue that our strategy, especially in increasing enrollment of students of color and decreasing non-need-based aid, has benefited the college at the expense of the students.

Institutional financial aid is not an entitlement; it is an investment in students who in the aggregate will benefit from and contribute to the college, and we must all be thoughtful and, yes, strategic in the awarding of it. To that end, the independent colleges in Minnesota and Pennsylvania have been studying this issue and are committed to finding ways, within legal limits, to address the imbalance.

Aggressive use of non-need-based aid meant to influence students’ enrollment decisions will drive up college costs in the long run and further threaten access if we do not work collectively to stop it. If Quirk’s article helps to raise this awareness, it will have served its purpose—in spite of its inflammatory and unbalanced perspective.

Robert J. Massa
Vice President for Enrollment and College Relations
Dickinson College
Carlisle, Pa.

Matthew Quirk replies:

Over six months and many hours of conversation with the leaders of the enrollment-management profession, I was told time and again that the industry as a whole is only beginning to face up to its role in keeping poor students out of higher education and, one hopes, to make amends. Mr. Massa and Dickinson College are a case in point. He says that much has changed since his days at Johns Hopkins, when he was devising plans to cut aid for prospective students who, by visiting campus, showed they were eager to attend. During his tenure at Dickinson, however, the number of low-income students has fallen year after year.

His increased enrollment of minorities is laudable, but his cuts in non-need-based aid are not as charitable as he suggests. Like many schools in the mid- to late 1990s, Dickinson had been offering huge discounts (averaging over 50 percent of tuition) just to fill seats. Under Mr. Massa, the school found that it could cut costs and rise in the rankings by reducing non-need-based financial aid and shifting it to the top 10 percent of its class. Dickinson switched from paying too much for average students to buying the very best, but, as so often happens, poor students appear to have been left out of the equation.

Preempting Iran

I was shocked by Terrence Henry’s article (“The Covert Option,” December Atlantic) and his cool suggestion that the United States should engage in a policy of targeted assassinations to slow down Iran’s nuclear program—or at the very least encourage Israel to do so. While I’m no fan of the Islamic Republic, I was under the naive impression that assassination as a tool of foreign policy not only violates international law but has also been explicitly forbidden by previous U.S. presidents. Moreover, Israel’s policy of assassinations, euphemistically referred to as “targeted killings,” is illegal under international law and has received widespread international condemnation.

It may be the case that all moral and legal restraints have been swept away by the Bush administration and that the prohibition of assassinations has joined the Geneva Conventions and the ban on torture as “quaint notions.” However, for a respected publication like The Atlantic to seemingly endorse the practice is unconscionable.

Karim Pakravan
Northbrook, Ill.

Terrence Henry replies:

Nowhere in my article did I “coolly suggest” or “seemingly endorse” the use of targeted assassinations by the United States to slow down Iran’s program, much less encourage Israel to do so. Rather, I pointed out the documented history of such programs in the past, and speculated as to whether or not they might occur today, while noting that an assassination campaign like the one carried out by Israel in the 1960s against Egypt and later against Iraq involves serious moral and political considerations. Karim Pakravan’s assertion that such an assassination campaign would be “explicitly forbidden” by the actions of previous U.S. presidents is not entirely accurate. While Executive Order 12333 (signed by President Reagan in 1981) and its predecessor, Executive Order 11905 (signed by President Ford in 1976, after several embarrassing assassination attempts against Fidel Castro came to light), prohibit political assassinations by U.S. personnel, each president has the right to modify or nullify these orders, and several have done so. President Reagan authorized a missile strike against Muammar Qaddafi in 1986, the first President Bush ordered strikes against Saddam Hussein during the Gulf War, and President Clinton directed them against Osama bin Laden in 1998. Such decisions are a matter of presidential policy, not law, and just as the current administration has seen fit to disregard these executive orders when going after terrorist groups, it could very well decide to do the same with Iran.

Declare War

Leslie H. Gelb and Anne-Marie Slaughter (“Declare War,” November Atlantic) argue for “a new law … requiring a congressional declaration of war in advance of any commitment of troops that promises sustained combat.” Their most compelling argument is the painful litany of leadership failures at the highest levels of the executive branch.

These failures occur because of poorly defined or altogether forgotten missions, which in turn stem from the repeated error of confusing freedom with democracy and lead to dull-witted efforts at nation-building. What is not clear is how involving Congress could possibly result in any improvement.

Would not Congress argue endlessly about the meaning of the phrase “that promises sustained combat”? With the advent of nuclear weapons, the nature of war has changed. A-teams and HALO drops will not lend themselves to congressional interference and debate, which partisan politicians are sure to attempt. Nor would the ultimate horror of a rogue state preparing to use its nuclear weapons.

The opportunities for the rigorous fact-finding and thoughtful debate the authors propose will be both brief and rare, and the risks the authors acknowledge are too great to contemplate. Will a nuclear-armed rogue state, with known plans for a preemptive strike, courteously wait until our debate is finished before pushing the launch button? Will we ignore ongoing genocide while we argue over the meaning of the word “sustained”? The devil is in the details.

No law, even one so thoroughly based in the Constitution, can ever take the place of great leadership in times of strife, or turn self-serving politicians into statesmen and stateswomen. Perhaps the matter is hopeless—at least until we begin to think about term limits.

Don Garland
Arden, N.C.

I agree wholeheartedly with Leslie H. Gelb and Anne-Marie Slaughter that the United States needs a Congress that will reassert its war powers and put an end to undeclared wars. A sobering declaration of war should be the first step in supporting future American soldiers. This mechanism is also the only practical implementation of a “just war” policy that we are ever likely to see.

Sadly, the proposal of a law to force senators and representatives to follow the Constitution in this matter is an obvious nonstarter. The solution is the ballot box. Sorry, Hillary Clinton. Sorry, John McCain. I will be looking for other candidates to support.

Jim Wolfe Wood
Stillwater, Minn.

Leslie H. Gelb and Anne-Marie Slaughter are correct: The time has come for Congress to reassert control over the deployment of troops. Perhaps an economic incentive would force Congress to thoroughly vet a future president’s decision to wage war.

Specifically, I propose a war tax that would be automatically imposed when Congress approved the use of force and automatically repealed once the reason for congressional authorization was addressed. Such a tax would force legislators to recognize that a vote to authorize the use of force would place a burden on all their constituents, not just those serving in the military. This awareness would most likely compel Congress to demand a clear strategy, timetable, and exit plan in advance.

Another benefit of a war tax would be that the revenue generated could be directed exclusively toward increasing military pay. This would give our all-volunteer military the financial resources to recruit qualified individuals in the midst of a hot war.

Paul Poast
Department of Economics
Ohio State University
Columbus, Ohio

Leslie H. Gelb and Anne-Marie Slaughter reply:

The letters all raise thoughtful and useful points of skepticism, which we ourselves expressed. Two points should be clear, however. First, nothing we propose would stand in the way of the president’s power and need to respond in emergencies. Second, the alternative to doing something along the lines we propose—an effort to take the business of going to war more seriously—is the continuation of a process careless of lives and country.

Lolita

I was with Mr. Hitchens most of the way through his masterful and thought-provoking return to Lolita, the book and the girl (“Hurricane Lolita,” December Atlantic). I sipped my milky, hot coffee and sat in the sun, almost purring. His clear writing was an antidote to the slumber party of twelve-year-old girls I had just weathered, and I can guarantee there was nothing sexy about any one of them: orange ink spills on the carpet, generalized shrieking, chomping on Pringles, “ewww”ing to the strains and pornographic snippets of “My Humps” or some such “song.”

I was reminiscing about the Lolita movies, too, as I read along—the sexual indecisiveness of James Mason and the vulnerability of Jeremy Irons (who I doubt could or would seduce a willing flea, at least in that anemic character he inhabited). It’s really the men I focused on in these films; the girls were purely uninteresting—to quote my old English professor, like the creamy surface of a yogurt cup before you dig in.

I jarred to a stop, though, upon reading Hitchens’s line, “Arresting, as well as disgusting, to suddenly notice that Lolita … would have been seventy this year.”

Why are old women so knee-jerkingly “disgusting”? I am telling you that if Humbert Humbert had not thrown a clot in prison, and if he had made it, like so many singly driven men, to age 100, he would still have been moved by his seventy-year-old Lolita and found “that I loved her more than anything I had ever seen or imagined on earth or hoped for anywhere else.” Obsession is obsession. Age has nothing to do with it. He was not a generalized pedophile, for my money, or he would have horned in on her friends at every turn. He was in love. Charles and Camilla—hello?

Mary Jane Gore
Charlottesville, Va.

Advice & Consent

Matthew Quirk’s list of rapidly reconstructed cities (“Cities Rising,” December Atlantic) suggests that agonizing over what new form New Orleans should take is pointless, since it will just be replaced in kind anyway. But the rebuilding imperatives that motivated each of the examples on Quirk’s list are absent in New Orleans. The thriving mercantile cities of Chicago, San Francisco, and Galveston, Texas, were destroyed during the great era of American city-building, and the economic pressure to rebuild them was enormous. Warsaw, Poland, and Tangshan, China, were rebuilt by totalitarian regimes, each for its own reasons. It is not clear that any such imperative animates the rebuilding of New Orleans, other than perhaps a moral one, which is ambiguous at best. (Does it really make sense to put low-income residents at risk again?)

American cities are in a Darwinian struggle for survival. Those that can attract knowledge workers and tourists are prospering, and those that cannot are shrinking. Other than the venerable French Quarter and the Garden District, which are situated on higher ground and consequently were not flooded, much of New Orleans was on the losing end of that survival struggle before Katrina. In the absence of private initiative to rebuild it, only the federal government could marshal the necessary resources. But with Republican voting power in suburban and rural areas, no national urban policy, and a mounting federal deficit, it seems unlikely that anyone can muster the necessary political will.

Matthew J. Kiefer
Jamaica Plain, Mass.

The study on the washing of hands in public restrooms highlighted in Primary Sources (December Atlantic) fails because its design is flawed and the data are polluted. Some subjects who ordinarily do not wash their hands were shamed into doing so by the presence of the investigator, tally sheet in hand.

I am among those who do not wash their hands after urinating. Why should I? I don’t urinate on my hands. Furthermore, T. E. Lawrence, who should know, says camel’s urine is antiseptic. Many were the wounds he cleansed in wadis scattered over the Arabian Desert. Why not human piss? Captain Bligh and his loyal crewmen drank urine on their epic sail to safety.

It makes more sense to wash your hands before urinating. Washing your hands after urinating is fetishism.

Charles Perrone
Moorestown, N.J.

P. J. O’Rourke’s beautiful piece on the new Airbus A380 (“The Mother Load,” November Atlantic) has one minor error: The previous record-holder for takeoff weight is not the Antonov An-124 (similar to our Lockheed C-5 Galaxy) but its beefed-up six-engine version, called the An-225. Only one was built before the Soviet Union dissolved in 1991, and it still flies under the flag of Ukraine.

Arthur C. Segal
Springville, Ala.

P. J. O’Rourke replies:

No doubt Arthur Segal is correct about the An-225. On the other hand, we both may be wrong. I have just returned from a holiday accompanied by my wife, two daughters, baby, portable crib, car seat, fifteen suitcases, eight pieces of carry-on baggage, and all the toys the grandparents gave the kids for Christmas. The Boeing 737 upon which we traveled—that may be the heaviest airplane to ever take off.
