Joshua Green’s article on the pollsters’ inaccuracies during the 2004 presidential election (“Do Polls Still Work?” November Atlantic) fails to consider that the 2004 exit polls may have been correct after all. Green starts with a predetermined conclusion for which he cites no official confirmation. There is much research and evidence to support the theory that the exit polls were right, and that the final count was manipulated—evidence that has sparked several lawsuits that the national media have ignored.
Exit polls are historically very accurate—so accurate that in most democracies, they are used to detect fraud. The discrepancy between the final outcome and the exit polls in the Ukrainian presidential race, for instance, was the basis for challenging the legitimacy of that election. In testimony before the U.S. House Committee on International Relations, this discrepancy was cited as the major indicator that the Ukrainian election had been stolen.
In many democracies, elections are conducted with paper ballots and counted by hand under public supervision; computers are not trusted with the process. Not so in this country. The GOP’s endorsement of computerized voting without paper trails is highly suspect, as is the privatization of the vote-count process (to companies with Republican CEOs). Add in various forms of voter disenfranchisement and you have a real problem with legitimacy.
Susan L. Olsen
Joshua Green replies:
Of course there was “official confirmation” of the 2004 election results. It came when George W. Bush was certified as the winner and John Kerry conceded. The only predetermined conclusion I can spot is your insinuation that the election was somehow stolen by Republicans—a charge for which there is no compelling evidence at all.
In reading about the exploits of Michael Mateas and Andrew Stern (“Sex, Lies, and Video Games,” November Atlantic), I was reminded of a scene from the film version of Fahrenheit 451. The fireman’s wife, Mrs. Montag, is sitting in front of her room-sized television screen watching a soap opera. Suddenly, the characters turn to her and ask, “What do you think, Mrs. Montag?” What she decides determines how they proceed. François Truffaut (and Ray Bradbury) used this scene to depict the mesmerizing power of video media, which is exerted at the expense of all other forms (especially books) and entraps people with an enveloping technological superiority.
Messrs. Mateas and Stern turn the warning on its head and call it art. But books (or any traditional art form) have one thing that interactivity will never have: a natural respect for the human need to experience perspectives different from one’s own. Adding one’s own point of view to a movie or video game, by contrast, would ultimately only reinforce that point of view. In the end, however clever it may be, it is an act of narcissism, not art.
I read with interest James Fallows’s article “Artificial Intelligentsia” (October Atlantic), but I wouldn’t be overly sanguine about the idea that easy access to facts and categorizing aids will improve our thinking. This argument rests on the assumption that the mind is a kind of blackboard: Put more and better-organized information on the board, and thinking will be easier.
But research tells us that the analogy is probably inaccurate. The brain is a network that functions through the establishment, growth, and pruning of neuronal connections; it is these shifting electrochemical patterns that constitute thought. Furthermore, all mental activity has the potential to alter the structure of the brain itself. It is precisely the weight of the innumerable cognitive operations we perform throughout our developing years that creates our adult minds. Memorizing information, while not the most glamorous of cognitive tasks, is nonetheless a part of this architectural process. Less memorization means less thinking. Organizing and categorizing information—shifting across the borders of perception, memory, and reason in the process—would seem to possess even more sculpting power. This is why our blackboard analogy, where the mind is the blackboard and information is the writing, is false. No meaningful distinction can be made between the contents of our thoughts and our mind. Thinking is the mind. And to cede portions of our cognitive kingdom to computers for the sake of convenience seems unwise.
None of this means that mental aids such as dictionaries and encyclopedias should be avoided. And, as Fallows suggested, perhaps their electronic counterparts will lead to new ways of creating and knowing. But, as Goethe said, “First master the rules, then discard them.” Young minds especially need to exercise all of their mental muscles in a comprehensive and disciplined way before being given all the answers by a blinking screen.
It’s puzzling that the media scratch their collective head every time yet another U.S. agency gets caught funding, training, transporting, protecting, or generally babysitting terrorists (“Twilight of the Assassins,” November Atlantic). It’s naive to believe our politicians’ rhetoric that terrorism is the new Nazism and thus anathema to our government. For decades, what we now term “terrorism” has been just another instrument in the realpolitik toolbox, openly discussed (albeit behind closed doors) and repeatedly implemented by people at the highest level of the national-security community. The fact that the protection of Luis Posada and Orlando Bosch has continued under both Democratic and Republican control of the White House and Congress reflects the simple fact that the government’s military, intelligence, and law-enforcement organs maintain their personnel for decades. Despite the imprisonments and firings of a few fall guys, the players remain the same throughout the years.
People like Posada and Bosch are allowed to remain at large because they are useful to this “black ops” cadre. Cuba’s own use of Posada as a poster boy for U.S.-backed terrorism only shows that Fidel Castro’s government is just as cynical as ours. With Posada on the loose, we can use him to kill; they can use him to propagandize; everybody wins. Everybody, that is, except the citizens of both countries whose trust in their own government leaves them dead—collateral damage in the big chess game.
Barbara Wallraff speculates (“Word Court,” November Atlantic) that printers who set type by hand commonly used an extra space after sentence-ending periods because inserting space there saved the work of spacing more evenly elsewhere. I learned to set type both by hand and by machine at South Dakota State University’s Printing and Rural Journalism school during the summer of 1953. We were taught to use less space after periods than elsewhere because aesthetically the period (and the comma) already gives the illusion of extra white space. Lazy printers may have inserted extra space after sentence-ending periods, but conscientious printers did not. In my opinion, the practice of inserting two spaces between sentences ignores aesthetics and arose with stenography, not with printing. I’ve seen the two-space instruction in various typewriting and secretarial manuals.
John Bowers
Barbara Wallraff replies:
What I wrote wasn’t exactly speculation. It’s an easily verified fact that extra space appears between sentences in many books published in the early twentieth century. As for the reason for the space, before publishing my theory, I ran it by the curator of a museum of the history of printing, who said it sounded good to him. But I’m grateful for John Bowers’s information. And I second his opinion that one space between sentences just plain looks better than two.