In the late sixties a chess-playing computer program was written at MIT and was entered into some local tournaments, where it won a number of games and caught the interest of the local newspapers. I had been curiously following the portentous visions that arose out of articles on the "cybernetic revolution" and was still unsure what to make of the Computer. Since I play chess, this new program seemed to offer a chance to sample its mysteries firsthand. I called some friends at MIT, and they arranged for me to play MacHack, as the program was known.
The room in which the computers were kept lacked all signs of diurnal rhythm. There were no windows. The illumination was low, so as not to interfere with the phosphor screens. The only sound was the clatter of high-speed readout printers, and underneath that, the hum of air conditioners and circulators. People quietly came and went with perfect indifference to the hour. I found the scene—the rapt and silent meditations of the programmers hunched over their terminals, the background hum with its suggestion of unceasing activity, the hushed light, the twenty-four-hour schedules—subtly exhilarating.
I was shown how to code the moves and enter them into a terminal. The game itself began with a stock opening line: both the computer and I knew the standard chess moves, and so far as I could tell, to about the same depth. I had decided on what I thought would be a winning strategy. Any programmer, I reasoned, would try to make the positions which his program had to evaluate simple ones and would assign a priority to clarifying exchanges. I therefore set out to make the position as complex as possible, hoping that the machine would lose its way among the options and commit a common strategic blunder, entering into a premature series of exchanges that would end only by increasing the activity of my pieces. Instead, in a flurry of exchanges, I lost a pawn and nearly the game. The trick of playing with MacHack, I learned, is to keep the position free from tension. The program's strong point is tactics; it places priorities on piece mobility and material gain, and in the nature of chess these values generate local, tactical give-and-take.
So my strategy was to play away from the program's abilities and to steer the game into slow-paced, stable, balanced positions. Whenever I did this, MacHack's game seemed to become nervous and moody. The program would lose its concentration, begin to shift objectives restlessly, and launch speculative attacks. This is not an unfamiliar style; every chess club has some players—they are called "romantics"—whose joy is found in contact and tension, in games where pieces flash across the board and unexpected possibilities open up with each new move. Put them in slow positions, and, like MacHack, they grow impatient and try to force their game.
We played no more than five times; eventually, beating it became too easy. The winning formula was mechanically simple: develop cautiously, keep contact between the two sides restricted, let the pawns lead out the pieces. MacHack would always develop in a rush and send its knights and bishops skittering about the board trying to scare up some quick action; denied that action, its position would collapse in confusion. The only way to lose to MacHack, I concluded, would be to play as though the dignity of Man somehow required one to crush the machine in the first dozen moves. If, instead, one just played away from it, the computer would barrel by and fall in a heap. I was far more bored than I would have been playing a human of similar strength, and I came to feel that even if MacHack had been good enough to win most, or all, of its games I still would have felt I was wasting my time. In the middle of the nineteenth century, an enterprising showman hid a chess-playing dwarf in a cabinet and toured Europe, claiming that he had invented a chess-playing automaton. Large crowds were awed by the phony machine. My experience with MacHack suggested that the crowds must have come not only because the "automaton" appeared to be a machine but because the dwarf was a master, and could consistently win.
During the last two games I played, MacHack refused to give its moves when I was about to checkmate it. My curiosity was piqued at this sullenness, and I stayed, trying to wait the machine out and get a reply. MacHack just hummed at me. Finally a programmer, becoming interested in this delay, extracted the record of MacHack's deliberations. It had been working over the mate variations, just looking at them, over and over. "Must be a bug somewhere," the programmer said.
Every culture has its juvenile embarrassments: misdirected enthusiasms which fail dramatically and in retrospect seem to say something humiliating about the civilization that pursued them. The great computer craze of the late fifties and the sixties is such a case. From the erecting of the machine, any number of respected thinkers derived a vision of society. Edward Teller foresaw an automatic world, ruled by machines. Gerard Piel, publisher of Scientific American, wrote and spoke about the "disemployment of the nervous system." C. P. Snow thought that automation would be a revolution with effects "far more intimate in the tone of our daily lives ... than either the agricultural transformation in Neolithic times or the early industrial revolution." "Is the handwriting on the wall for the labor movement?" the Wall Street Journal asked, looking at the matter from its own perspective. ("Their membership may dwindle, their strike power weaken, and their political strength fade. And some of unionism's biggest names may be lesser names tomorrow.") The Ad Hoc Committee for the Triple Revolution (weaponry, automation, human rights), a study group composed of social luminaries like Gunnar Myrdal, Linus Pauling, A. J. Muste, Michael Harrington, Bayard Rustin, Irving Howe, Robert Heilbroner, and Tom Hayden and Todd Gitlin of SDS, saw the coming of automation as an argument for a guaranteed minimum income. "In twenty years," wrote Donald Michaels in a Center for the Study of Democratic Institutions book, "most of our citizens will be unable to understand the cybernated world in which we live ... the problems of government will be beyond the ken even of our college graduates. Most people will have had to recognize that, when it comes to logic, the machines by and large can think better than they.... There will be a small, almost separate society of people in rapport with the advanced computers.
"These cyberneticians will have established a relationship with their machines that cannot be shared with the average man. Those with the talent for the work probably will have to develop it from childhood and will be trained as extensively as classical ballerinas." Professor John Wilkinson of the University of California called for the founding of human sanctuaries "as we establish refuges for condors and whooping cranes."
The pragmatists among those who worried about "America in the 'Automic' Age" thought about unemployment. The Bureau of Labor Statistics estimated that 300,000 workers were replaced annually by machines; the American Foundation of Employment and Automation calculated that 2 million jobs a year vanished. President Kennedy said in 1962 that adjusting to automation was America's greatest domestic "challenge" of the sixties, which puts his negative prescience quotient as high as anyone else's. Harry Van Arsdale won the New York electricians a five-hour day, and there was strong feeling that this was just a beginning. "The only question," said George Meany, "is how short the work week is to be."
But there was a visionary wing as well, and one which achieved, to judge by the number of scare stories which ran in the media, remarkable impact. Very roughly, two scenarios were discernible. The first was that automation would proceed at an ever accelerating rate until computers had entirely displaced the working and lower-middle classes. (I find it stimulating that Robbie the Robot, the famous automaton from the movie Forbidden Planet, whose capable and compliant nature earned him his own TV series, had ebony skin.) Those classes, once thrown out of work, would mill about in proletarian discontent. Then, depending on the perspective of the seer, they would either sponsor a revolution themselves or force a revolutionary response from the established order. Andrew Hacker of Cornell warned about "the contraction of the corporate constituency" and predicted a Luddite rampage. Margaret Mead proposed protecting by law certain jobs, "dustman, the night watchman, the postman." She was particularly worried about the problem of the lowest intelligence "brackets," and did not, at least for this class, favor a minimum income: "I am not sure whether good pay in idleness would be a very healthy thing just for the least intelligent, who are least able to make good use of their leisure." This scenario concluded with the feeling that if America did, by one route or the other, successfully manage its entry into "The Age of Abundance," the result would be a classless world in which all lived in a leisurely upper-middle class style, devoting themselves to the arts and public improvements.
The other line of thought, often found in journals like Argosy, National Enquirer, and Popular Mechanics, was that the new brain machines would displace the upper-middle class. The writers who held this second view were impressed with the machine's potential for autonomy and its inscrutable authoritativeness. ("Harvard Computer Finds English Language Fuzzy"—Science Digest.) While it was not clear that unemployment would be a problem ("Wanted: 500,000 Men to Feed Computers"—Popular Science), what did emerge was the feeling that everyone would be forced, by the unappealability of the computer's decisions, into the essence of the lower-middle class experience, which is to be ordered about by those "who know what they're doing."
Nearly fifteen years have passed since these specters first became popular, and clearly we are no further down either of these roads; instead, there has been a perceptible loss of conviction that we are on any road at all. The rates of increase in productivity per man-hour, one of the classic measures of automation, were no different in the sixties than in the fifties, though nearly 200,000 computers were installed during the last decade. Unemployment has held roughly stable. Computers have assumed a number of functions, some of which have been historically white-collar jobs: reservations, credit and billing, processing checks, payroll operations, inventory scheduling; and some blue-collar: freight routing, and especially flow monitoring and process control in the metallurgical, petrochemical, paper, and feed industries. But while what the computers do is important, it certainly does not appear to add up to a revolution. If computers posed, and pose, a threat, it lies not in rendering less significant those decisions humans make but, as in the privacy issue, in enlarging the impact of, and the opportunities for, the staple villainies of the Old Adam.
Why were so many illustrious thinkers so wrong? Or, perhaps simpler, why have we been so reluctant to learn from their mistakes? "Latest Machines See, Hear, Speak and Sing—And May Outthink Man" is the headline of a Wall Street Journal story that appeared in June, 1973, but it could as easily have been the head on any number of stories over the last fifteen years.
What is striking about these stories is the determination of their authors to believe. They seem never to notice the highly artificial environments or the extremely simplified nature of the problems which allow the computer programs they describe to show even the modest success they have to date. Do the authors ever ask why it is that assembly line jobs, whose tediousness made them famous targets of opportunity for computers, remain virtually untouched by automated hands?
The vatic winds which blew some fifteen years ago were more comprehensible: America had just emerged from the fifties, an extraordinary decade. Never before had we delighted in such a rain of innovations with such an immediate and intimate effect on our daily lives. Television took root everywhere. The Polaroid camera, the Aqualung, the transistor radio, and the birth-control pill came on the market. The hi-fi and stereo industry sprang up. Commercial jet travel became standard. Polio was controlled. The hydrogen bomb, the ICBM, space satellites, and the computer all were significant public issues, altering patterns of discourse and attention if nothing else. Xerox brought out its first office copier in 1959; the first working model of the laser was announced in 1960.
We took these inventions, some boon, some bane, as evidence that a high level of innovation was a settled feature of America, and assumed that that level would, if anything, rise still higher over the decades to come. In that atmosphere no technological achievement seemed beyond us and no forecast too fantastic. It was felt only realistic to advance bold speculations.
Actually, one promise of the "soaring sixties" came spectacularly true—the moon-landing program. But it came to seem increasingly anomalous, not representative of our national direction, certainly not emblematic of our national mood. The sixties was a decade in which apprehensions about the effects of technology became widespread, and glittering inventions ceased to enhance our daily lives. Indeed, aside from the pocket calculator, the introduction of new products has fallen off drastically in the last ten years. The promise of robotics is not the only promise unkept. Cancer and the common cold have not been cured; nuclear power through fusion seems more distant than ever. Cheap desalinization has not been achieved. One of the pioneering computers, ENIAC, built by Eckert and Mauchly, was invented in the hope that it would facilitate long-range weather forecasting. Almost certainly John Mauchly thought he was closer to that goal in 1943 than meteorologists think they are today.
The persistence of the belief that machine intelligence is within our grasp thus becomes all the more curious, since it can draw support from neither specific achievements nor the general pace of the nation's technology. It has been a costly faith. To point to only one example, $20 million was spent by the CIA, the Department of Defense, and other government agencies on automatic language translation until 1966, when a review committee of the National Academy of Sciences concluded that the prospect of readable translations seemed to be receding in proportion to the money spent on it.
The effort to get machines to learn, see, hear, deduce, and intuit—to achieve what is called "Artificial Intelligence," or AI—has received little popular attention, presumably, at least in part, because of this conviction that AI is already a fact. Who, except for the handful of professionals involved, has even a vague sense of why artificial intelligence has proven to be so difficult a task, what the problems are, how they are being attacked, and what theories have been proposed and abandoned? It seems bizarre that in a culture as interested in psychology and intelligence as ours the questions that have occupied this small community have been so widely ignored. AI researchers are, in a sense, applied epistemologists and are attacking problems which can have considerable public interest, as Piaget, Chomsky, and Skinner, to mention only three names, have shown.
The approach of an AI researcher is different from that of a philosopher or theoretical psychologist, of course. The point of traditional scientific theory is to account for the evidence with a concise structural metaphor. If this metaphor succeeds in explaining a wide range of observations coherently and economically, it is accepted, even if its "real" basis, its actual neurophysiology, remains obscure. AI scientists, on the other hand, try to build devices which will produce some of the behaviors they are interested in. The working assumption is that they will eventually arrive at an understanding of intelligence no less meaningful than that reached through more traditional routes.
The popular assumption was rather more simple. It seems to have been that the potential of the machine is within the physical device, as the potential for speaking is in humans, and that it is just a matter of learning how to get it going. The actual program—the software—is understood, if, indeed, it is thought of at all, as bearing the same sort of relation to computer operations that cake recipes do to cooks: a guide to the energy and manipulative imagination of an essentially autonomous actor. The U.S. Patent Office has justified its refusal to patent software by insisting that a program is a "technique," "a mental process," and/or an "idea." The only kind of program the Patent Office will patent is one that has been "wired-in," built as the core of a special-purpose computer that will perform that function and no other. But if the same program is not embodied in a mechanical device, if it is written as one of a large number of programs, to be entered into a general-purpose computer capable of handling any of them, it is not patentable, for it then becomes an "idea." This reasoning, that programs are to computers what ideas are to human brains, is absurd to those who work with the machines.
The tendency to concentrate on hardware abilities, on the machine's memory and speed, emerged with the first computers. An early MIT research computer, for instance, to which a TV special and a New Yorker column were devoted, was dubbed "The Whirlwind."
That this emphasis arose was natural enough. What computers did and do—manipulate a very carefully defined body of information through a narrow range of arithmetic techniques—is unlikely to be very interesting. But their style, their tirelessness and infallibility, was interesting and the stress laid upon these qualities turned them into a cultural phenomenon. This was true, one speculates, because speed and memory, with freedom from error, are the same features humans conventionally use in identifying what they call intelligence. When someone is referred to as having "brains," it usually means that he is never caught in a mistake. It means that he has a memory that absorbs quickly and voluminously, that he can solve complicated math problems in his head. It certainly means speed; if a person finds himself in the company of those who think consistently faster than he does, that difference is usually taken as one that reflects on his mind as a whole. These qualities are what weigh with those who send for correspondence courses that promise ten ways to increase brain power. And they count no less at higher levels of society. During Robert McNamara's tenure as Secretary of Defense, his many admirers in the press and Congress would often volunteer their observation that his mind was so awe-inspiring as to be almost computerlike.
In retrospect one can see several other reasons why computers were bound to become totems. Decision-makers in a democratic society are forever restive with the convention that their decisions should not appear to be blatantly self-seeking. Now they could use the computer as a kind of Mexican bank for decisions: judgments could appear to have been laundered, or, more specifically, bleached, of self-interest and arbitrariness.
This "bleaching" effect can, and often does, allow an increase in arbitrariness. One example: the Board of the National Endowment for the Arts has a number of curators on it; curators have a constant headache with artists complaining about the company which their pictures have been made to keep. The National Endowment accordingly funds studies in which artists are asked near whose pictures they would like their paintings to hang. A matrix analysis is done on the preferences and returned to the exhibitors, who hang the paintings by the numbers—with what aesthetic results I cannot imagine—and then successfully deflect the inevitable outrage of the painters onto the computers.
Both the obsession with hardware and the need for new sources of authority were important in triggering off and maintaining the computer craze. Equally important, though, was the fact that the authorities, the computer researchers, were in no position to shoot down public misapprehension as unequivocally, for instance, as a cancer specialist can scotch a faddist cure. There was a misunderstanding between the general public and these scientists that each side, for different reasons, was reluctant to resolve. As ill-defined as the word "intelligence" is, in general usage it usually involves pursuing some end for independent, autonomous purposes. The discrete activity, whether it be learning, ordering, remembering, logical thinking, designing, or whatever, often seems less important than this sense that intelligence is master in its own house, that it has free will. Feelings like this are intimately bound up in the everyday habit of assigning responsibility for action to computers, instead of to their programmers. More dramatically, they gave rise to all those fantasies about superbrain coups d'etat, wherein computers "take over for the good of mankind," or plunge nations into war against their will.
The essence of the free-will dilemma is that it seems to be impossible for something to be free and determinable as well. It appears to be a contradiction in terms to imagine such a decision-making ability being reduced to a series of predictable, cause-and-effect regularities. But postulating such an ability means postulating something that is, in scientific terms, inherently incomprehensible. This paradox has never been adequately resolved—Karl Popper called it a nightmare—and it is something of a philosophical running sore to those with a scientific temperament. But few groups of scientists must confront this issue more directly than AI researchers. It is understandably repugnant to them to believe that there lies at the core of their chosen subject some impenetrable mystery, some vitalistic, unknowable spirit thing, a "ghost in the machine." They do not believe this; they understand intelligence as an aggregation of enormously complex abilities which interact in ways even more complex but which are both, abilities and interactions, creatures of natural law. "When intelligent machines are constructed," Marvin Minsky, the director of the MIT Artificial Intelligence Lab, once wrote, "we should not be surprised to find them as confused and as stubborn as men in their convictions about mind-matter, consciousness, free will, and the like. A man's or a machine's strength of conviction about such things tells us nothing about the world or about the man."
So when the dialogue between the general public and the AI scientists began, it flowed from two quite different sets of understanding about the nature of mind. Conceivably the scientists might have sidestepped the whole question by refusing to use the word "intelligence," by saying they found it meaningless, and by insisting that their work be called by some such term as advanced automation research. But to do so would probably not have worked, and it would have appeared to be a handing over of the term by default to the vitalists. It would have been a tacit admission of a proposition that any scientist, and especially an AI scientist, feels a professional duty to resist. And they have resisted. Marvin Minsky was quoted in Life as calling the brain a "meat computer"; Herb Simon, the most venerable figure in AI research, said that humans were programmed by their genes and environment in the same sense that a computer was programmed. The public tended to hear these remarks in its own way, as statements that computers had free will.
This confusion involved more than a few lines lifted from public speeches; the whole relationship between the public and computer scientists is shot through with it. A far more significant and substantial example occurred in the 1950s when computers first began to "reason."
At least since Aristotle men have struggled to convince themselves that the capacity to reason, to draw up deductive syllogisms, is man's distinguishing glory. Logic (in this strict sense) is useful in only a vanishingly small fraction of the problems encountered in understanding and explaining the world; probably 99 percent of our thinking is analogical, wherein we satisfy ourselves that some useful similarity exists between that which we already know and that which we do not. But analogies proceed half-consciously at best and can always be disputed—is a pretty girl really like a melody? Logic is explicit, entirely conscious, and promises to settle questions once and for all. It does this by preserving equalities between statements; learning deductive logic is learning how to say nothing that is not a restatement of whatever is given. Once acquiescence is obtained on the initial assumptions, everything else follows inexorably, for, if done properly, what follows and what is given are fundamentally identical statements.
Obviously this imposes severe limitations on the usefulness of the techniques, but when something is demonstrated in this fashion it is proven beyond dispute. Whoever frames such a statement speaks, like Euclid, to all succeeding civilizations. Thus, the central tradition of Western philosophy has been the struggle to reduce as many questions as possible to a short list of innocuous, self-evident first assumptions, and then to elaborate a series of derivations which prove that the philosopher's viewpoints are the only ones possible. Leibnitz, who, among many, sought to establish a "calculus of human knowledge" which could prove or disprove any proposition at all (he blamed his inability to achieve this on inadequate funding), once remarked that if he were successful in his efforts, he would be able, upon the commencement of a dispute, to declare to his unfortunate opposite "'Let us calculate, Sir!' and, by taking pen and pencil, we should settle the question."
The arithmetic manipulations that form the core of computer programs (+, =, ≠) correspond to deductive logic (and, is, not). In 1957 a program called Logic Theorist proved thirty-eight out of fifty-two theorems from the Principia Mathematica. Two years later the devisers of Logic Theorist, Newell, Shaw, and Simon, wrote another program, this one called the General Problem Solver, which could deduce the answer to a number of classic cocktail-party brain twisters. In 1963 T. G. Evans wrote a program whose performance on certain geometric analogy tests was comparable to that of fifteen-year-old children.
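The correspondence described above can be made concrete with a small sketch, offered here in modern Python rather than in anything Newell, Shaw, and Simon would have used, and not drawn from their actual programs. It tests a deduction (modus ponens) the way the essay says logic works: by confirming that in every assignment of truth values where the premises hold, the conclusion holds too, so the conclusion restates what was already given.

```python
from itertools import product

def implies(p, q):
    """Material implication: false only when p is true and q is false."""
    return (not p) or q

def modus_ponens_valid():
    # A deduction is valid when the conclusion holds in every case
    # where the premises hold; it says nothing not already given.
    for p, q in product([True, False], repeat=2):
        premises = implies(p, q) and p  # "p implies q" and "p"
        if premises and not q:          # premises true, conclusion false?
            return False
    return True

print(modus_ponens_valid())  # prints True
```

The exhaustive check over truth values is, in miniature, what Leibniz's "Let us calculate, Sir!" imagined: once the premises are agreed on, the outcome is settled mechanically.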
All these programs seemed, to the AI community at the time, extremely exciting. What they mean by success in their field, what they must mean, given the assumptions they hold, is increasing the number of humanlike functions, abilities, and activities that a machine can perform. When a sufficiently large number of the right kinds of abilities can be executed, the machine will be intelligent in every sense. No other comprehensible way of using the term can be imagined. The ability to do logic seemed one of the most important of these activities, and therefore a program which could make deductions appeared to be a giant step. In 1957 Herb Simon himself said, in the flush of triumph that followed upon his having written the General Problem Solver: "There are now in the world machines that think, that learn, and that create. Moreover, their ability to do these things is going to increase rapidly until—in the visible future—the range of problems they can handle will be coextensive with the range to which the human mind has been applied."
In the years that followed a lot of effort was devoted to programs which played chess and checkers, found proofs for theorems in geometry and symbolic logic, solved trigonometric and algebraic identities, composed music and poetry, simulated neuroses. There was even a psychiatrist program, artfully designed to reproduce the evasive quasi-responsiveness that is so distinctive a note in psychoanalysis. When these programs were displayed, AI workers thought of them as achievements in reproducing one more human activity, while the popular understanding was, rather, that the achievement lay in allowing some deep, unitary phenomenon one more means of expressing itself. The two sets of definitions were not only different but mutually antagonistic. However, both sides continued to use the same words, insisting on doing so partially because of that very antagonism, and the misunderstanding that resulted has not been cleared up to this day.
The programs just mentioned all involved activities that would naturally occur to a middle-class academic as being intelligent, and they were an obvious place to begin. There were isolated successes, the occasional discovery of a proof more elegant than that traditionally taught, a checkers-playing program which nearly beat the world champion—and always beat its author, A. L. Samuel. Yet these programs remained laboratory curiosities and in time came to seem trivial and sterile. The early enthusiasm began to look badly misplaced. Whatever abilities the programs had could be produced only in contexts that were extremely narrow and artificial. It required an enormous amount of labor to process and define the techniques and information which formed their input, and this labor had to be repeated, almost always, whenever even the most subtle changes in the problem were made. Whatever else intelligence may be, it surely implies an ability to link up with the real world, to pursue some objective or apply a competence over at least a small range of natural experience. An important example of the failure of computer programs to do this might be the automatic zip-code reader; the postal service is eager to buy a machine that can read zip codes and addresses. Yet, after a decade of well-financed work, the most sophisticated model the service has can handle successfully only 5 percent of the mail. The machine can read ninety different print fonts and fifteen typewriter fonts, but is baffled by mail which does not fall within the narrow limits of size, print contrast, envelope flexibility, and decoration, and it cannot read handwriting at all. To achieve even this much the machines cost $800,000 each with a minimum order of thirty-five units. By contrast the optical character reader used by Amtrak to read tickets (two fonts) costs about $45,000 for a single-item purchase.
Over the last ten years, AI researchers have become increasingly fascinated and impressed by the fluid adaptiveness which allows humans to interact freely with the world and each other. It may seem idiosyncratic to use the word "intelligence" to refer to abilities like vision and hearing and touch-feedback systems; after all, even animals are quite skilled at these. But AI workers reply that these abilities are so many thousands of times more complex and intricate than a theorem-proving program that the claim to the word "intelligence" is even stronger. There is, as well, a widespread feeling that these sensory capabilities are what formed the basis for the evolution of cognition in the first place.
Vision intuitively seems the most important of these senses. Its significance is hinted at by its appearance in the root of words like "clairvoyance," "imagination," and "insight," and it is clearly crucial to advanced automation projects. For instance, in 1971, the Japanese government picked machine vision as the project which offered the most promise of decisively leapfrogging the U.S. computer industry and set up a $180 million, eight-year crash program to achieve it.
Initial work in vision tended to reflect the belief that thinking was a higher order of activity, properly conducted in the brain, while seeing, a lower order of activity, went on in the retina.
When the retina was finished with its processing, the theory went, then and only then would it hand the results over to the brain, which would interpret them and make the high-level executive decisions appropriate to its status. This seemed a logical approach to the AI researchers and they adopted it, more half-consciously than otherwise. Some devoted themselves to the "lower" ability, which, in this context, meant picking out lines from a map of light-intensity values. Others worked on the "higher" functions, which were expected to reason out what the lines meant. Both programs were conceived as having, ideally, great generality and independence. The ambition of those working on line-finding programs was to devise a set of powerful, universal procedures that could be routinely applied to any scene. Similarly, the scene-analyst programmers hoped to write a series of programs that would be general-purpose evaluators, generalizers, analogy-makers.
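The "lower" ability, picking out lines from a map of light-intensity values, can be suggested with a toy sketch. This is not any historical program; the grid, the threshold, and the neighbor test are all assumptions made purely for illustration:

```python
def find_edge_points(intensity, threshold=50):
    """Return (row, col) points where intensity jumps sharply
    between horizontal or vertical neighbours."""
    edges = set()
    rows, cols = len(intensity), len(intensity[0])
    for r in range(rows):
        for c in range(cols):
            # Compare each cell with its right-hand and lower neighbour.
            for dr, dc in ((0, 1), (1, 0)):
                nr, nc = r + dr, c + dc
                if nr < rows and nc < cols:
                    if abs(intensity[r][c] - intensity[nr][nc]) >= threshold:
                        edges.add((r, c))
                        edges.add((nr, nc))
    return edges

# A dark square on a light background: the edge points trace its border.
scene = [
    [200, 200, 200, 200],
    [200,  20,  20, 200],
    [200,  20,  20, 200],
    [200, 200, 200, 200],
]
border = find_edge_points(scene)
```

Even in this toy form, the difficulty the researchers faced is visible: a speck of dirt or a shadow produces an intensity jump just as surely as a "true" edge does.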
These were the cognitive assumptions; epistemologically the AI researchers, again half-consciously, were "realists." They believed the objects that compose the world are essentially like our perception of them; that by their different ways of reflecting and absorbing light, things announce themselves to the observer, for whom perception is essentially a recording process. The eyes are like windows which open up the mind to a reality totally independent of the observer. This theory, realism, is opposed by a number of philosophical alternatives (the extreme of which is solipsism) which hold that seeing is an active process which imposes identities and qualities on the world about it. The AI workers, however, felt more comfortable with a kind of neo-Lockean empiricism, which assumed that each scene has a knowable, retrievable essence, its "reality," and that by gradually narrowing the range of possibilities, more and more error would be progressively eliminated until one finally arrived at "the facts."
A shape-identifying vision program was designed to identify and isolate only those points which lie along the "true" edges of the images, discarding those which are produced by dirt, decorations, or texture. Then it would try to combine these points into lines, endeavoring meanwhile to pick out only the significant lines, even when lighting conditions had left them invisible or overlying objects had blocked them, while not being distracted by shadows. Finally, with all this done, and it would have to be done flawlessly, because false information generated at one level makes the operations of higher levels totally useless, the machine would match a list of the identified features of the object with a list of the possibilities stored in its memory. If it could do so without being confused by viewing angles and distance variations, a correct identification might be made.
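The final matching stage, and its brittleness, can be sketched in miniature. The feature names and the shape catalogue below are invented for illustration; no actual vision program of the period is being reproduced:

```python
# A hypothetical catalogue of stored possibilities, keyed by shape name.
SHAPE_CATALOGUE = {
    "cube":    {"edges": 12, "corners": 8, "faces_visible": 3},
    "wedge":   {"edges": 9,  "corners": 6, "faces_visible": 3},
    "pyramid": {"edges": 8,  "corners": 5, "faces_visible": 2},
}

def identify(features):
    """Match a list of identified features against stored possibilities.

    Every observed feature must agree exactly with the template; under
    the old assumptions, one false value generated at a lower level
    spoils the match entirely and nothing is recognized.
    """
    for name, template in SHAPE_CATALOGUE.items():
        if all(features.get(k) == v for k, v in template.items()):
            return name
    return None

identify({"edges": 12, "corners": 8, "faces_visible": 3})  # → "cube"
identify({"edges": 12, "corners": 8, "faces_visible": 2})  # → None
```

The second call illustrates the point made above: a single corrupted feature, perhaps an edge lost to a shadow, makes the operations of the higher level totally useless.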
Over the last ten years, partial solutions to this formidable problem have emerged. But it was too hard; the process was too involved. The conviction began to grow in the profession that no animal would see as fluidly and responsively as it now does if it had to wade through anything so tortuous. The world of perception is a world of illusion piled on illusion, and aggregating facts about it in the hope that enough facts would finally point unequivocally to the "truth" began to seem an impractical fancy. To write a "rational" program that would never arrive at any kind of judgment until it had a good reason for doing so was to write a program that would paralyze itself with unrealistically high standards.
A different set of assumptions is now being used in programs which work with essences, expectations which are read into the data as needed. A shape-finding program of this kind would have in its memory a vocabulary of the prototypical, three-dimensional forms which it might reasonably expect to encounter in its pursuit of the immediate task at hand. A minimum of data would trigger off a guess about what was being viewed, and then a short list of specific questions, previously programmed in the event that this guess was made, would be asked to confirm it. Once confirmation was received, the program would see what should be there; for instance, a flood of detailed, preprogrammed expectations would be triggered off about what the parts of the figure hidden from the viewer should look like.
The flaw in these programs is that they have to be specifically tailored to particular environments; their virtue is that they promise to be more useful within those environments. Facts about shape are not the only ones that can be associated with these prototypical essences. If a program imposes a "typical" bottle shape on, let us say, a perception of "thin neck/sloping shoulders/straight sides," information about the fragility of glass, the likely nature of the contents, the speed with which it can be filled or emptied can be bound in with it. These facts will all be true of the machine's "idea" of bottle, not necessarily of the specific bottle in front of it, so these programs must also be equipped to handle anomalies. Programs built under the old assumptions had no expectations about the world; nothing surprised them. They applied their procedures routinely and passively to whatever was given to them. This is not true of the newer programs; if something that a new program has decided is a cow turns out not to have an udder, that fact must have been anticipated and have associated with it specific directions on how to test for bulls and horses.
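The expectation-driven scheme can be sketched as a small table of prototypes: a minimum of cues triggers a guess, the prototype carries its bound-in associated facts, and anomalies are routed to preprogrammed alternative tests. All the names here, the cue strings, the "bottle" and "cow" entries, are illustrative assumptions, not the contents of any real program:

```python
PROTOTYPES = {
    "bottle": {
        "cues": {"thin neck", "sloping shoulders", "straight sides"},
        "associated_facts": {"material": "glass", "fragile": True},
        "anomaly_tests": {},
    },
    "cow": {
        "cues": {"four legs", "horns"},
        "associated_facts": {"has_udder": True},
        # If the expected udder is missing, test for bulls and horses.
        "anomaly_tests": {"has_udder": ["bull", "horse"]},
    },
}

def guess(observed_cues):
    """A minimum of data triggers off a guess: return the first
    prototype whose cues are all present, plus its bound-in facts."""
    for name, proto in PROTOTYPES.items():
        if proto["cues"] <= observed_cues:
            return name, dict(proto["associated_facts"])
    return None, {}

def check_anomaly(prototype, fact, value):
    """If an observed fact contradicts the prototype's expectation,
    return the alternatives the program was told to test for."""
    proto = PROTOTYPES[prototype]
    if proto["associated_facts"].get(fact) != value:
        return proto["anomaly_tests"].get(fact, [])
    return []

name, facts = guess({"four legs", "horns", "tail"})   # guesses "cow"
alternatives = check_anomaly("cow", "has_udder", False)
```

Note the contrast with the old approach: here the program asserts facts it has not observed (the udder), and so must carry explicit machinery for being surprised.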
The general idea seems an attractive one, if for no other reason than that human beings appear to work in analogous ways. If someone were to drop a glass and it bounced, we would feel a dramatic reaction within ourselves; something would have been violated. Where else lies the appeal of Sherlock Holmes if not in that he continued to look long after his readers were satisfied with the typicality of what they saw and, being so satisfied, stopped looking? What purpose is served by the convention of the topic sentence, usually made redundant by the succeeding text, but as an aid to selecting the right grouping of associated expectations?
However, these programs and the approach they embody are quite new, and it is difficult to say how successful they will be. There are a number of people, of whom the best known is Professor Hubert Dreyfus of Berkeley, who believe that they will be no more successful than were those of the sixties. Dreyfus contends that computers will never be able to think, not only because they lack autonomous desire (free will) but because thought itself is not amenable to the step-by-step counting routines upon which digital-computer operations are based. Those who are more optimistic point out that while there were not more than 20 full-time workers in AI in 1963, there are now at least 200; that the hardware has become much more sophisticated; and that a consensus has developed about what the important problems are and how they ought to be attacked.
MIT hopes within five years to have developed an electronic repairman that can assemble, inspect, maintain, and repair electronic equipment. Stanford University has been doing a lot of work on manipulation and coordinating vision and tactile systems, and is moving rapidly toward automatic building and assembling machines. Natural language comprehension, wherein a human can converse with a computer in everyday English, has been showing especially dramatic progress in recent years, and there are some showpiece programs which work slowly but well. A number of private companies, particularly the Xerox Corporation, are increasing their support of their own research programs.
So it is at least possible that, sometime during the 1980s, we will see the gradual introduction of programs which, whether or not we call them intelligent, will be able to react reasonably to significantly complicated situations. If we are to learn anything at all from the history of computers in America, it ought to be extreme care in predicting what computers will mean to the society and the culture. There are some general observations that might be pertinent. The first is that these programs are extremely complex and therefore expensive. Even the simplest takes man-years to write, and they must be specifically tailored to particular environments. Their introduction will therefore be extremely slow. It is unlikely that any analogue will exist to the payroll programs of the fifties, which could flash through whole groups of industries in a single year. Second, if we were underprepared for the first wave of automation, we are, if anything, overprepared for the second. Much of the public believes that computers already possess powers that, even by the most optimistic forecast, they will not have until well into the next century. New achievements are therefore more likely to be greeted with a shrug than with any sense of heightened significance. Third, one cannot be sure to what extent the sheer physical and financial scale of the machines of the fifties contributed to the frenzy that surrounded them, but it seems worth noting that the price of hardware is falling precipitously, and appears certain to continue to do so. It has been estimated that the entire world stock of computers, with an original purchase price of $25 billion, could be replaced today for one billion dollars. The comparative value of human labor involved in installations is rising correspondingly. Ten years ago programming accounted for one fifth of the cost of an average installation; by the end of this decade it will be four fifths.
For all these reasons it seems unlikely that these new programs will revive our concern about machines "taking over" to the intensity of the early sixties, though there is one important counterpoint to be made. These programs could enormously increase the surveillance powers of governments. Right now research into face- and speech-recognition programs is proceeding very slowly, but if they are achieved, governments will be able to monitor hundreds of thousands of phone conversations simultaneously, or automatically compile dossiers on the routines of large numbers of their citizens. In such a society one might well feel that machines had indeed taken over.
Ironically, the success of the artificial-intelligence scientists may end in their losing their running battle with the "vitalists." The confusion over machine intelligence arose only because the word sprawls over so many activities. Whether or not one believed that constructing geometric proofs was an intelligent activity in itself or merely expressed an intelligence which fundamentally resided at some deeper level, one had to believe that it was legitimate to invoke the word in the first place. The same assumption can be said to be true of such primitive abilities as thinking fast, or possessing an accurate memory. But it seems clear that, over the long run, when activities become mechanized, they lose status. This is an ancient dynamic, long antedating computers. Before the camera was invented, perfect reproduction of nature was thought a noble objective in painting, if not indeed the only proper end. When the camera was able to make this ideal routinely available, everyone grew bored and went off to do other things (though it might be mentioned, not before both Sam Morse and Nathaniel Hawthorne had written that surely the camera would leave artists with naught but a purely historical life). The telegraph companies inherited none of the romance which attached to the riders of the Pony Express. Routing, the planning of the most cost-effective truck and freight-car routes, was once a respected job that was thought to require judgment, skill, and experience. That function is now done by computers, and has been for the last ten years, and I would guess that in all that time not two people in the transportation industries have thought seriously about the computer's showing "skill" and "judgment."
Indeed, it seems probable that the computer has had at least a part in the developing conviction, expressed most explicitly by, but hardly confined to, the "counterculture," that logical, sequential, cause-and-effect reasoning is not only an undistinguished but even a disreputable ability.
Some of the activities that are important to us and our sense of being human could, can, and might be programmed; others cannot. To take the extreme case, there simply is no serious sense in which one can talk about a computer program praying or loving. If it continues to be true that to mechanize an activity is precisely to divest it of its mana, to cause humans to withdraw from it emotionally, then the impact of these programs, at least culturally, will be to refine our ideas of human intelligence, to cause those ideas to recede, or advance, into the subjective, affective, expressive regions of our nature. If this happens, we might lose interest in the whole issue of whether machines can "outthink" man, and the use of the term "intelligence" by AI researchers may come to seem increasingly anachronistic and inappropriate the more successful they are.