I was surprised and disappointed to see that not a single American from the pre-Revolutionary period was included in your feature “They Made America” (December Atlantic). More specifically, I was surprised that a definition of what constitutes “American” was not among the five challenges faced by your distinguished panel.
I would posit that American ideals had their roots in men and women living in America before the United States ever existed. Indeed, were it not for certain enterprising souls, the United States—as we know it—might never have come into being. I can understand why the panel might omit Thomas Hariot, John Winthrop, or Anne Bradstreet; however, omitting Captain John Smith and William Bradford seems unpardonable, even if their citizenship was technically English.
Smith was one of America’s staunchest promoters and gave us our notion of the new country as a land of plenty and opportunity. Long before Ben Franklin, Smith represented our iconic rags-to-riches story; in varying retellings of the Pocahontas story, he gave us one of our first legends; and, of course, it was Smith’s tenacity that kept the first permanent English colony, Jamestown, afloat during its most tenuous times.
William Bradford also merits our recognition. The first governor of the Plymouth Plantation, Bradford led the Pilgrims to the shores of New England. He not only recorded our earliest history, but also established the idea of America as a land of destiny, and a place where one was free to worship in one’s own way.
George C. Scouten
It was striking that your list of influential Americans did not include any playwrights. More remarkably, your companion article, which otherwise analyzed the composition of the list quite meticulously, did not even find this absence worthy of comment. In contrast, a 2002 BBC series ranked Shakespeare the fifth “Greatest Briton,” and in a recent poll by French public television, Molière was rated the eighth “Greatest Frenchman of All Time.” Surely Eugene O’Neill, Tennessee Williams, Arthur Miller, Lillian Hellman, and, more recently, August Wilson and Wendy Wasserstein have shaped our country significantly.
Jacob M. Appel
New York, N.Y.
I cannot imagine how your list of influential Americans omitted the late Milton Friedman. Friedman did more than any other twentieth-century American to make the United States prosperous—more than any president, business leader, or technologist. He was a trailblazing economist, winning the Nobel Prize in 1976. He provided the intellectual framework that enabled Paul Volcker and Ronald Reagan (No. 17 on your list) to heal the U.S. economy when it was on track to become a high-inflation, high-unemployment banana republic in the style of Peronist Argentina. His work was also a major reason why the years since 1982 have had only minor, shallow pauses in increasing American prosperity.
Rancho Palos Verdes, Calif.
In “They Made America,” your entry for Dr. Jonas Salk describes his vaccine as having eradicated polio. While it is true that Dr. Salk developed the first vaccine against polio, the disease is still in circulation today in some parts of the globe. The World Health Organization, with the support of Rotary International, is now in the advanced stages of an effort to eradicate the disease primarily with the Oral Polio Vaccine, the vaccine developed by Dr. Albert Sabin. Indeed, the elimination of polio in most countries was accomplished with Sabin’s vaccine, not Salk’s.
Like thousands of other readers, I eagerly perused your listing of the 100 Most Influential Americans, and I was disturbed to find myself missing. I’ve spoken to my mother, my children, and other objective sources, and they all confirm that, even taking into account challenges such as “the collaborative nature of achievement” and “the power of pop culture,” my influence still has a pervasiveness that is well-nigh legendary. Am I funnier than Andrew Carnegie? Absolutely. More attractive than Susan B. Anthony? Without question. Do I possess greater musical talent than Frederick Law Olmsted, a sunnier disposition than John Brown, better teeth than George Washington? The facts speak for themselves.
And yet, and yet … your panel chose to ignore these attributes and instead embrace a list that includes historical footnotes such as Woodrow Wilson, who was unable to play Ping-Pong (which I excel at), Eli Whitney (who certainly cannot type ninety words per minute, as I can), and Lewis and Clark (who don’t even have first names!).
What do I have to do—die? (No, I see Bill Gates is up there.) Invent the cotton gin? (Already been done.) Resign the presidency? (Ditto.) Well, what’s done is done. It’s probably too late to ask you to pulp the issue, and I imagine I shall keep reading The Atlantic, but honestly—have you no sense of decency, sirs, at long last?
In focusing almost exclusively on the idea of “variety and comfort,” Virginia Postrel’s “In Praise of Chain Stores” (December Atlantic) ignores several far more important objections to retail homogenization. That these ubiquitous establishments make life more boring and less varied is little more than an ancillary complaint. The bigger objection to the likes of Wal-Mart, McDonald’s, and Home Depot is that they hurt real people. They typically pay their workers abysmally low wages and offer virtually no benefits, treating them more like expendable pawns than human beings. (Some chains, including Starbucks, are notable exceptions to this rule.) They offer low prices by strong-arming small-scale manufacturers, desperate to stay alive at all, into economically unfavorable agreements. They grease government wheels to turn in their favor. They knowingly purchase products manufactured in Third World countries by exploited workers who toil under abominable conditions. They artificially deflate their prices just long enough to drive local mom-and-pop establishments out of business and then jack prices back up once they’ve cornered the market.
I have no desire for major chains to disappear from the American landscape (and I’d be naive to even think this were possible). As Postrel points out, their value and selection provide an important market segment for American consumers. However, I do wish they would play by a fair set of rules, instead of using their clout to create unfair advantages at the expense of the companies that manufacture their products, the individuals who staff their stores, and the small-business owners who compete with them.
Virginia Postrel replies:
I could happily devote a 1,500-word column to refuting each of Michelle Trela’s heartfelt but undocumented complaints. Unfortunately, The Atlantic’s readers (not to mention its editors) would soon tire of the endless chain-store series. Many of these criticisms, notably the “problem” of low prices, date back at least a century, to the rise of department stores and the spread of chain grocers like A&P. That’s why so much of Emile Zola’s The Ladies’ Paradise, published in 1883, seems contemporary. Suffice it to say that we would not be better off, either as workers or as consumers, with a return to the old days of high-priced local monopolies.
I focused on the “homogenization” critique because it is the most common current complaint among elites and because it is the only complaint that is actually about chains in general, rather than about a particular chain or store format. The activists who campaign to keep Ann Taylor, Crate & Barrel, and California Pizza Kitchen out of currently moribund downtowns are not doing so to save the high-paying jobs at Dottie’s Boutique, Joe’s Furniture Store, and Luigi’s Pizzeria. They hate Starbucks, whatever its wages and benefits. Like local retailers who simply don’t want the competition, they just don’t like the idea of chains.
Joshua Green’s article on the pollsters’ inaccuracies during the 2004 presidential election (“Do Polls Still Work?” November Atlantic) fails to consider that the 2004 exit polls may have been correct after all. Green starts with a predetermined conclusion for which he cites no official confirmation. There is much research and evidence to support the theory that the exit polls were right, and that the final count was manipulated—evidence that has sparked several lawsuits that the national media have ignored.
Exit polls are historically very accurate—so accurate that in most democracies, they are used to detect fraud. The discrepancy between the final outcome and the exit polls in the Ukrainian presidential race, for instance, was the basis for challenging the legitimacy of that election. In testimony before the U.S. House Committee on International Relations, this discrepancy was cited as the major indicator that the Ukrainian election had been stolen.
In many democracies, elections are conducted with paper ballots and counted by hand under public supervision; computers are not trusted with the process. Not so in this country. The GOP’s endorsement of computerized voting without paper trails is highly suspect, as is the privatization of the vote-count process (to companies with Republican CEOs). Add in various forms of voter disenfranchisement and you have a real problem with legitimacy.
Susan L. Olsen
Joshua Green replies:
Of course there was “official confirmation” of the 2004 election results. It came when George W. Bush was certified as the winner and John Kerry conceded. The only predetermined conclusion I can spot is your insinuation that the election was somehow stolen by Republicans—a charge for which there is no compelling evidence at all.
In reading about the exploits of Michael Mateas and Andrew Stern (“Sex, Lies, and Video Games,” November Atlantic), I was reminded of a scene from the film version of Fahrenheit 451. The fireman’s wife, Mrs. Montag, is sitting in front of her room-sized television screen watching a soap opera. Suddenly, the characters turn to her and ask, “What do you think, Mrs. Montag?” What she decides determines how they proceed. François Truffaut (and Ray Bradbury) used this scene to depict the mesmerizing power of video media, which is exerted at the expense of all other forms (especially books) and entraps people with an enveloping technological superiority.
Messrs. Mateas and Stern turn the warning on its head and call it art. But books (or any traditional art form) have one thing that interactivity will never have: a natural respect for the human need to experience perspectives different from one’s own. Adding one’s own point of view to a movie or video game, by contrast, would ultimately only reinforce that point of view. In the end, however clever it may be, it is an act of narcissism, not art.
I read with interest James Fallows’s article “Artificial Intelligentsia” (October Atlantic), but I wouldn’t be overly sanguine about the idea that easy access to facts and categorizing aids will improve our thinking. This argument rests on the assumption that the mind is a kind of blackboard: Put more and better-organized information on the board, and thinking will be easier.
But research tells us that the analogy is probably inaccurate. The brain is a network that functions through the establishment, growth, and pruning of neuronal connections; it is these shifting electrochemical patterns that constitute thought. Furthermore, all mental activity has the potential to alter the structure of the brain itself. It is precisely the weight of the innumerable cognitive operations we perform throughout our developing years that creates our adult minds. Memorizing information, while not the most glamorous of cognitive tasks, is nonetheless a part of this architectural process. Less memorization means less thinking. Organizing and categorizing information—shifting across the borders of perception, memory, and reason in the process—would seem to possess even more sculpting power. This is why our blackboard analogy, where the mind is the blackboard and information is the writing, is false. No meaningful distinction can be made between the contents of our thoughts and our mind. Thinking is the mind. And to cede portions of our cognitive kingdom to computers for the sake of convenience seems unwise.
None of this means that mental aids such as dictionaries and encyclopedias should be avoided. And, as Fallows suggested, perhaps their electronic counterparts will lead to new ways of creating and knowing. But, as Goethe said, “First master the rules, then discard them.” Young minds especially need to exercise all of their mental muscles in a comprehensive and disciplined way before being given all the answers by a blinking screen.
It’s puzzling that the media scratch their collective head every time yet another U.S. agency gets caught funding, training, transporting, protecting, or generally babysitting terrorists (“Twilight of the Assassins,” November Atlantic). It’s naive to believe our politicians’ rhetoric that terrorism is the new Nazism and thus anathema to our government. For decades, what we now term “terrorism” has been just another instrument in the realpolitik toolbox, openly discussed (albeit behind closed doors) and repeatedly implemented by people at the highest level of the national-security community. The fact that the protection of Luis Posada and Orlando Bosch has continued under both Democratic and Republican control of the White House and Congress reflects the simple fact that the government’s military, intelligence, and law-enforcement organs maintain their personnel for decades. Despite the imprisonments and firings of a few fall guys, the players remain the same throughout the years.
People like Posada and Bosch are allowed to remain at large because they are useful to this “black ops” cadre. Cuba’s own use of Posada as a poster boy for U.S.-backed terrorism only shows that Fidel Castro’s government is just as cynical as ours. With Posada on the loose, we can use him to kill; they can use him to propagandize; everybody wins. Everybody, that is, except the citizens of both countries whose trust in their own government leaves them dead—collateral damage in the big chess game.
Barbara Wallraff speculates (“Word Court,” November Atlantic) that printers who set type by hand commonly used an extra space after sentence-ending periods because inserting space there saved the work of spacing more evenly elsewhere. I learned to set type both by hand and by machine at South Dakota State University’s Printing and Rural Journalism school during the summer of 1953. We were taught to use less space after periods than elsewhere because aesthetically the period (and the comma) already gives the illusion of extra white space. Lazy printers may have inserted extra space after sentence-ending periods, but conscientious printers did not. In my opinion, the practice of inserting two spaces between sentences ignores aesthetics and arose with stenography, not with printing. I’ve seen the two-space instruction in various typewriting and secretarial manuals.
Barbara Wallraff replies:
What I wrote wasn’t exactly speculation. It’s an easily verified fact that extra space appears between sentences in many books published in the early twentieth century. As for the reason for the space, before publishing my theory, I ran it by the curator of a museum of the history of printing, who said it sounded good to him. But I’m grateful for John Bowers’s information. And I second his opinion that one space between sentences just plain looks better than two.