Milton Viorst would like us to think that Salafism—one of the most rigid forms of Islam—isn’t all that extreme, and that Ali al-Timimi, the most devout and spiritual of young men, was able to separate the “religious” from the “political” (“The Education of Ali al-Timimi,” June Atlantic). No mean feat, since Islam itself recognizes no such division. He would also like us to know that, although affiliated with Wahhabism, the radical Islam of the Saudis, al-Timimi had absolutely no interest in waging violent jihad. In fact, insists Viorst, taking the word of al-Timimi’s friend, he was more intrigued by the concept of ijtihad—the desire to revise Islamic law.
But al-Timimi’s words, quoted at length in the article, point us in a completely different direction. Rather than wanting to “reform” Islam, he seemed eager to assert its primacy over all other religions, and to see sharia—Islamic law—take precedence over man-made law. How could he not, committed as he was to the tenets of Salafism?
Mindy G. Alter
I last sat with Ali al-Timimi shortly before the announcement of the verdict in his trial. He had shown up unexpectedly to attend the celebration of a newborn child in our community. We were joined by another Muslim, like myself an American professional converted to Islam in midlife. We shared lamb and rice, and al-Timimi spoke softly with us, shrugging off speculation regarding the impending outcome of his trial. Certainly, he wanted to return to his studies and to teaching, to his work and to his family, but he was pleased with whatever his Lord had chosen for him. He politely indulged our questions and concern, yet repeatedly turned the conversation from the trial to his worries over how America had changed and the great difficulties he saw for all Americans. The America of his youth, spent in part in the home of Milton Viorst, was, he feared, much threatened.
My first contact with al-Timimi had been ten years earlier. Though I had been Muslim for only two years then, I was familiar with the thinking and teaching of many in the Muslim world. As the voice of al-Timimi poured forth from a cassette of one of his lectures, I realized that here was something different. I could not find the posturing of the mystic, the rant of the fanatic, the self-indulgent intellectual ramblings of the freethinker. Nor could I find politics, current events, or obsession with conflict. Rather, the impression was of a tongue struggling in vain to get the knowledge out—pure religious knowledge. It was an ecstatic experience, a treasured memory, and, as it turned out, one shared by many in the English-speaking Muslim community.
A fair and impartial jury found unanimously that the United States proved all the charges against Ali al-Timimi beyond a reasonable doubt. The verdict was based on relevant, reliable, and overwhelming evidence of his guilt. This evidence was presented in the course of a trial at which al-Timimi was represented by vigorous and skilled attorneys. The trial was presided over by a fair and impartial judge—who later upheld all the verdicts against all of al-Timimi’s challenges. If The Atlantic’s readers had seen and heard the evidence introduced at trial, they would have a very different view of this case and of al-Timimi than what Viorst depicted. That is precisely why these decisions are left to juries, guided by judges, rather than to journalists.
U.S. Attorney, Eastern District of Virginia
Milton Viorst replies:
Mr. Rosenberg reminds me of when my friend Barbara told her family, forty years ago, that my colleague Steve had proposed marriage to her, and her mother’s comment was, “Just a journalist?” Since then Steve and I, along with our wives, have often laughed at the line. Mr. Rosenberg makes me laugh again. Among the responsibilities of journalists is watching out for the excesses of public officials, from which injustice can result, as I think it did in the al-Timimi case. Mr. Rosenberg has his duties; I have mine.
Matthew Stewart confuses the shallow brand of management consulting he peddled with graduate business education (“The Management Myth,” June Atlantic). The vast majority of M.B.A.s do not aspire to a career in management consulting, nor are most business-school curricula laden with the pseudoscientific literature he so readily cites. His belief that a liberal-arts education is more fitting training for managers reflects his limited experience in the business world. Nietzsche isn’t going to cut it when you have to discount the cash flows of a complex capital budgeting problem, implement a new software system, or navigate the current thicket of employment law. For real problems, real skills are needed, though they may certainly be acquired in a venue other than graduate business school.
North Pomfret, Vt.
Matthew Stewart’s critique of case-study management education is based on a single case study: namely, himself. Since he studied philosophy and apparently succeeded in a consulting business, he argues that everyone else should follow the same path. But a sample of one proves nothing. Stewart also seems to have fallen into the trap of assuming that what goes on at the Harvard Business School is the quintessence of management education. But other leading business schools, such as Chicago and Stanford, stress basic disciplines and analytical techniques rather than case studies. Their rigorous versions of management education differ profoundly from the one that Stewart lampoons.
And given what passes for philosophy these days at most colleges, imagine the outcome if everyone followed Stewart’s advice: a whole generation of business managers thinking and communicating like Derrida, Foucault, de Man, and the rest of that deconstructionist, postmodernist gang. I am going to lay in a supply of earplugs.
Professor Emeritus, Accounting & Finance
University of New Hampshire
While dismissing the “core competencies” taught in M.B.A. programs as outdated and theoretical, Matthew Stewart demonstrates core incompetencies in the very skills he was supposed to have gained in such a program. The subtitle of the piece suggests the author will explore the correlation between an M.B.A. degree and “success in business.” Instead, Stewart focuses on just one career path and a handful of management theories. He claims to have applied statistical methodology in his study, but his hypothesis regarding the value of an M.B.A. education is tested with insights gained from summer reading and hearsay from former colleagues. Perhaps “sampling error” is not a buzzword the author learned in his former career. One hopes that the future armies of philosophers storming the world of business will bring sharper analysis to the front line.
Los Angeles, Calif.
As a professor of management I found Matthew Stewart’s critique of management research and theory to be a delightfully engaging but ultimately disappointing read. He is dead-on in his various criticisms, but eventually the cynic needs to come up with a better alternative than the one he is dissing. Stewart asserts, “Rousseau and Shakespeare said it all much, much better.” But what is the “all” that Rousseau and Shakespeare said better? Stewart does not give us much help, other than to note that humans seem to have a tendency to be ugly to each other as they engage in corporate versions of civil war.
I am not looking for a “three-point program,” but I would like to see Stewart do more than confirm what Dilbert has already pointed out with devastating clarity in hundreds of cartoons.
Howard Raid Professor of Business
Matthew Stewart replies:
Whether or not you believe your correspondent suffers from limited business experience, as Mr. Semple presumes, the fact remains that the management professions are rife with paradox: so-called experts who peddle faddish platitudes, consultants who offer advice on businesses about which they know little, and CEOs who reward themselves lavishly come good times or bad. To be sure, some of these paradoxes can and should be explained away. I’m still waiting.
Prof. Horrigan seems intent on rescuing the honor of management education by casting the Harvard Business School overboard—surely a drastic measure. Mr. Semple insists that success in business requires real skills, but then abandons the business schools altogether when he acknowledges that such skills “may certainly be acquired” elsewhere. He misses the mark when he dismisses consulting as being of interest only to an insignificant minority of business-school students. In a recent Fortune poll in which M.B.A. students were asked to name the sector in which they would like to work, consulting came in first, with 25 percent of the votes. Mr. Armitage has taken my claims to statistical rigor a little too seriously, and further seems to think that only a statistical-correlation analysis will yield answers about the worth of management education. While such studies have been performed and do have some value, the idea that they exhaust the analysis of the subject is about as fine an example as I can name of the drive for specious precision that characterizes so much of management thought. Personal testimonials and historical analysis have their limits too, but to dismiss them out of hand, as Prof. Horrigan does, as a “sample of one” is just pseudo-rigor.
Some of the confusion in the discussion stems from my decision to cover the distinct activities of management theory, management consulting, and management education in a single article. It would be a mistake to assign all the sins of the theorists to the business schools, as Mr. Semple notes, since the schools do more than just pass along the theory. My aim in treating all of these activities together, however, was not to pass a single judgment on all, but to examine the way the fundamental idea of “management” manifests itself in each.
My critics do score some points against academic philosophy—points that I willingly concede. I, too, would lay in a set of earplugs, if, as Prof. Horrigan warns, business managers began imitating Derrida & Company. To identify all philosophy with the postmodernist gang, however, seems even less accurate than to equate all management education with the Harvard Business School.
Prof. Lehman is right to point out that my article offers more criticism than it does a constructive program. To this I plead, first, that there is value in supplying a corrective to the incredible lack of self-criticism that characterizes the management world, and second, that perhaps he will consider buying my next book, in which I plan to offer a solution to all management problems.
Jeffrey Rosen repeatedly labels as “draconian” measures that would protect unborn children and guarantee their most basic human rights (“The Day After Roe,” June Atlantic). From the standpoint of these children, it is our current abortion jurisprudence that is draconian. During the first nine months of a child’s life, the present system offers virtually no legal protection against being killed.
The oft-heard response that children in the prenatal stages of development are not actually persons is illiberal, to say the least. This is the same argument that was used against African Americans in the Dred Scott case, in which the Supreme Court held that they had no rights because they were not really persons under the law.
Advocates of legal abortion may reply that the African Americans in question were already born, whereas unborn children are still inside their mothers’ bodies and therefore are not yet persons. But this argument merely substitutes age for skin color as the criterion by which we may deem someone a nonperson and claim him or her as our private property to dispose of as we please. In both cases, an arbitrary line has been drawn to exclude a whole class of human beings from the community of our common care and concern.
In Jeffrey Rosen’s view, a future Supreme Court decision overturning Roe v. Wade would merely “allo[w] the states to ban or restrict abortions from the very beginning of pregnancy,” and in most states, “early-term abortions would be protected and late-term ones restricted.” But there is no assurance that a decision overturning Roe would be that deferential to the legislative process. Rosen’s pro-democratic vision reminds me of a debate early this year in Manhattan on the Alito nomination, sponsored by rival groups of lawyers, the Federalist Society and the American Constitution Society. The pro-Alito advocate repeatedly argued against Roe v. Wade on the grounds that democratically elected state legislatures should shape our responses to morally sensitive life-and-death matters like abortion. However, when asked whether the Oregon assisted-suicide law, enacted by such a legislature and twice upheld by referendum votes, should be upheld out of deference to state legislatures, she backed away and, much like Chief Justice John Roberts, argued that in that case the federal interest in regulating lethal drugs could override democratic lawmaking.
Her reply suggested that when pro-lifers are faced with a choice between rule by state legislatures and rule by federal judges, their only meaningful standard is the desired pro-life result. While I respect the pro-democratic logic of Rosen’s analysis, I don’t think he has considered the extent to which many pro-life lawyers and judges have decided that their moral vision trumps deference to democracy and legislators.
Jonathan S. Gellman
Jeffrey Rosen replies:
As these letters suggest, there are entrenched views on both sides of the abortion issue that are not likely to be changed by democratic debate. My piece suggested, however, that if Roe were overturned, legislatures would eventually—although not immediately—reflect the more moderate views of the majority of Americans, protecting the right to choose abortion early in pregnancy and restricting it later in pregnancy.
Benjamin Schwarz’s review of A. C. Grayling’s Among the Dead Cities (June Atlantic) didn’t address the reason the British switched over to nighttime area bombing in the first place. Early in the war, the Royal Air Force attempted precision daylight bombing of legitimate targets such as military depots and armaments factories. The Luftwaffe’s day fighters slaughtered the pathetically under-armed RAF bombers. To save the lives of the aircrews, Bomber Command had no choice but to switch to night operations.
However, although night’s concealing cloak gave the bomber crews a better chance of surviving a given mission, it did nothing for their ability to hit a target. They had trouble even finding the target, much less bombing it: there are cases on record in which a village of no military significance, fifty miles from the intended target, was bombed in error, and the resulting fires led the rest of the bomber stream to drop its payload on the hapless village, the crews assuming the fires marked the target.
Given these two factors—quite aside from Arthur “Bomber” Harris’s deeply held belief that by bombing the dwellings of the German workforce he could break Germany’s morale—it was almost inevitable that the RAF would bomb wide areas instead of pinpoint targets. For Grayling and Schwarz to ignore the technical factors involved in this decision is irresponsible. Why attribute to deliberate malice—though there was enough of that in Harris’s bombing policy—what can be explained by simple necessity?
Benjamin Schwarz replies:
Mr. Jaruk’s arguments are chronologically confused. Certainly, early in the war the RAF lacked the tools and the doctrine to undertake precision bombing against German targets. But at that point it was unable to carry out an area-bombing offensive as well. Area bombing wasn’t a strategy to which the RAF turned because it couldn’t bomb precisely. Rather, it was a strategy—really a theory, embraced for honest if somewhat erroneous reasons—for which it had to develop a capability. Mr. Jaruk suggests that area bombing damaged civilian targets almost incidentally. Nothing of the sort. The theory of area bombing, “an absolutely devastating, exterminating attack by very heavy bombers … upon the Nazi homeland,” as Churchill put it, was predicated on massacring or threatening to massacre civilians.
Area bombing only began in earnest in mid-1943, after the British and the Americans decided at the January Casablanca conference to launch an all-out “combined bomber offensive” (largely to appease their Soviet allies, who were clamoring for an offensive in the West), and when Bomber Command at last had the resources and the equipment (a large fleet of heavy bombers and sophisticated navigational aids) to bomb on the scale that the theory of area bombing demanded. Grayling’s indictment of area bombing begins at this date, not, as Mr. Jaruk would have it, “[e]arly in the war.” Further undermining Mr. Jaruk’s “simple necessity” argument is the fact that at this point, when the RAF began its intense area-bombing efforts, the U.S. Army Air Force began its intense daylight precision-bombing campaign against Germany. Moreover, in the period from the autumn of 1943 to the spring of 1944, the USAAF’s newly deployed long-range fighters effectively demolished the Luftwaffe’s fighter force. This meant that during the strategic bombing offensive that began in September 1944 (from late winter 1944 until the fall, Allied bombing efforts concentrated almost exclusively on tactical support of the Normandy invasion), the skies above the Reich were essentially free of the German fighter menace that Mr. Jaruk maintains impelled Bomber Command’s area-bombing strategy in the first place. Still, the RAF’s area bombing continued for another seven months, until the last days of the war—even as the USAAF’s precision-bombing efforts were demolishing Germany’s transport systems, its armaments industry, and, most important, its oil facilities. 
This was the period in which Bomber Command’s forces inflicted by far their heaviest damage on German civilians (72 percent of Allied bombs dropped on Germany throughout the war were dropped after July 1944), and this is the period—again, a period when the threat of German fighters had been eliminated, and when precision bombing would have been relatively easy—on which Grayling concentrates his indictment.
I greatly enjoyed Geoffrey Wheatcroft’s essay “Non-native Sons” (June Atlantic), but I would like to point out one small error regarding the 1968 Manchester United team he mentions in passing. Wheatcroft describes the team as including only two non-English players, but Paddy Crerand, who like Denis Law is from Scotland, was also on the pitch that day for the Red Devils when they hoisted the cup. (As you can see, my interest in this matter is purely relative.)
Patrick J. Crerand
Geoffrey Wheatcroft replies:
My usual response when detected in error is Samuel Johnson’s “Ignorance, madam, pure ignorance,” but in this case I can’t even use that excuse, since I remember the 1968 European Cup final vividly and know exactly who played for Manchester United. Apart from Law and Best, the team of course included Mr. Crerand’s illustrious kinsman Paddy, and also Shay Brennan, who was English-born but played for the Republic of Ireland. My mistake was the clumsy wording “international only in that it included …” This was not meant to suggest that all the players except Law and Best were English. I can do no more than thank Mr. Crerand and apologize: Faulty syntax, sir, pure faulty syntax.
In “Purple Mountains” (July/August Atlantic), Ryan Sager completely mischaracterizes New Mexico, describing it as a previously reliable Republican state that is now morphing into a swing state. But New Mexico has always been a swing state: it has voted for the loser in only two presidential elections (in 1976 and 2000) since it became a state. Its electoral votes have gone to the Democrats twelve times and to the Republicans twelve times—which is about as evenhanded as it gets.
Ryan Sager replies:
Mr. Minner mistakes the sweep of my argument about the interior West’s recent redness. In fact, 2004 was the first election since 1988 in which a Republican candidate has won all eight states of the interior West. Al Gore won New Mexico in 2000; in 1996, in the course of a handy reelection, Bill Clinton won New Mexico, Arizona, and Nevada; in 1992, Clinton won New Mexico, Colorado, Montana, and Nevada. From 1968 to 1988, however, the Republican presidential candidate won every single interior West state for six elections running. My argument was that just as the South (never 100 percent Democratic, but “solid” nonetheless) once realigned in favor of the GOP, the West has the potential to birth a Democratic resurgence in the not-too-distant future.
An illustration accompanying “A Confederacy of Eunuchs” (July/August Atlantic) erroneously depicted Paul Martin as the current prime minister of Canada. Martin in fact left office in February, and was succeeded by Stephen Harper. In “Idealism and Practicality” (July/August Atlantic), the political scientist Stanley Hoffmann’s surname was misspelled. The July/August Atlantic was inaccurately identified as “Vol. 297, No. 6.” The issue should have been labeled “Vol. 298, No. 1.” We regret the errors.