
Previously in Digital Culture:

"Coming of Age in Cyberspace," by David S. Bennahum (October 28, 1998)
In the bedrooms, the arcades, and the high school computer rooms of the 1980s, kids of the Atari generation invented today's digital culture. An excerpt from David Bennahum's memoir, Extra Life.

"Portable Musings," by Sven Birkerts (September 10, 1998)
The book is the network, the network is knowledge, and soon you'll be able to curl up in bed with all of it. This calls for some serious rumination.

"The Invisible World Order," by Andrew Piper (July 29, 1998)
If digital technology is to serve humanity (and not the other way around), we'll have to come to terms with the database and all that it implies.

"The Right Mix," by Ralph Lombreglia (June 4, 1998)
Digital technology has made the private recording studio itself into a new kind of musical instrument.

"A Function Specific to Joy," by Harvey Blume (April 29, 1998)
Are we ready for computers that know how we feel?

More on Technology and Digital Culture in Atlantic Unbound and The Atlantic Monthly.

Join the conversation in the Technology & Digital Culture conference of Post & Riposte.
The Digital Philosopher
Can robotics shed light on the human mind? On evolution? Daniel Dennett -- whose work unites neuroscience, computer science, and evolutionary biology -- has some provocative answers. Is he on to something, or just chasing the zeitgeist?

by Harvey Blume

December 9, 1998

Back when I was a student of philosophy, in the late sixties, it was customary to divide the discipline into two schools: "analytic" and "continental." Continental philosophers typically built up large edifices of meaning. Analytic philosophers broke large systems down, scrutinizing every brick. Continental philosophers at times overreached, acting as though thought alone were capable of storming the gates of heaven and hell. Analytic philosophers were at times intellectually stingy. If continental philosophy could sound like poetry or music, analytic philosophy seemed anxious to sound like science. Georg Wilhelm Friedrich Hegel, who saw every aspect of history, politics, religion, and art as a moment in the unfolding of the World Spirit, might be nominated to be the standard bearer for continental philosophy. Ludwig Wittgenstein might be picked, on behalf of analytic philosophy, to deflate Hegel's overflowing theses and antitheses. Wittgenstein, after all, is well known for writing, in Tractatus Logico-Philosophicus (1921), "What we cannot speak about we must pass over in silence."

A Conversation
with Daniel Dennett

Harvey Blume interviews the philosopher who never met a robot he didn't like.

The work of the philosopher Daniel Dennett, who heads the Center for Cognitive Studies at Tufts University, points not only to an array of contemporary issues but also to the continental-analytic fault line that runs through philosophy itself. When I started reading Dennett recently, I thought at first that I was dealing with an analytic philosopher who used artificial intelligence and computer science to pare philosophical problems down to manageable size. That's true as far as it goes -- it's tempting to say that Dennett has never met a robot he didn't like, and that what he likes most about them is that they are philosophical experiments. Instead of arguing interminably about how a mind works, Dennett believes it makes sense to build one, however rudimentary, and set it loose to see what it can do. If you're staging a Turing Test, in order to see if a computer can fool humans into thinking it is truly intelligent, Dennett would be exactly the philosopher to sit on the board of judges -- as he has several times. If you're designing a state-of-the-art robot in order to see how it negotiates the real world (or some subset thereof), Dennett would be the man for your team -- and not surprisingly he does have ties to the artificial intelligence (AI) community, and is invited to go where other philosophers are not encouraged or have no wish to go. He works closely, for example, with Rod Brooks, the head of MIT's AI Lab, on the design of Cog the "cognitive robot."

[Image: Cog the "cognitive robot."]

As posed by Alan Turing, the question of machine intelligence has become a central theme of our time -- and here, as elsewhere, Dennett brings analytic rigor to bear. To the question of whether machines can attain high-order intelligence, Dennett gives this provocative answer: "The best reason for believing that robots might some day become conscious is that we human beings are conscious, and we are a sort of robot ourselves."

"Computers keep you honest in a way that philosophers have been hankering after for a long time. Computers force you to get clear about things that it's important to get clear about. AI is really a new and better way of doing certain sorts of philosophy."
--Daniel Dennett, in an interview with Harvey Blume.

This is part of Dennett's campaign to overcome the mind-body split bequeathed to us by Descartes, who identified his existence with his self-consciousness (his Cogito) and believed that the thinking portion of the self was attached almost accidentally to the body. Like many in cognitive science, Dennett wants to show that mind and matter are not necessarily opposed. Mind is not made of different stuff than body -- not if body is understood to be an enormously complex information-processing system of which the brain is a part. And if that is so, it's not so obvious that man-made information-processing machines are incapable of breaking into self-awareness. If you do not acknowledge a divine spark of some sort (whether called the soul, an élan vital, or, to use Dennett's term, a "sky-hook"), how can you be sure that life, cooked up over eons in the laboratory of nature, is different -- fundamentally, rather than by degree of refinement -- from the models produced in an AI lab?

Dennett's refusal to allow a basic distinction between human and machine intelligence has earned him his enemies. He and John Searle, of the University of California, have rumbled up and down the philosophical block for years over the question of whether computers can truly experience themselves (or truly experience anything at all, for that matter, which Searle denies they can do). Dennett has also cited unexpected precursors to his position: at a time when everyone else seems to be bashing Freud, Dennett has nice things to say about him. For Dennett (as for the AI pioneer Marvin Minsky, in The Society of Mind), Freud was ahead of his time in showing how the ego stole its precious self-awareness from the activities of innumerable processes that are anything but self-aware -- in other words, from the unconscious. Freud's unconscious becomes a placeholder for neural networking, massively distributed parallel processing, or some other trick of wiring that will one day allow Cog or one of its kin to be launched mindfully into the world.

Dennett is a skillful writer who has probed mind-body-machine connections in his books Brainstorms: Philosophical Essays on Mind and Psychology (1978), Consciousness Explained (1991), and Brainchildren: Essays on Designing Minds (1998). When I met with him recently in his office at Tufts, he was quick to acknowledge how difficult it is to talk about the mind these days without using metaphors drawn from computer science. In his view, this is just fine. "Taking on new concepts," he said, "new ways of thinking about things, so that you suddenly open up new model spaces to explore -- that's great, but you are also tying your hands when you do that." He continued, "Now, it's very important that you tie your hands. Working under constraint is a necessary condition for really important inventions. All the great art of the Renaissance was done under the constraint that it had to be in the service of Christian iconography. Can you make great art under those circumstances? You sure can. Would they have made greater art if they had been free bohemians instead of coddled slaves of bishops and dukes? No, I don't think so."

"Does that make us coddled slaves of the computer?" I asked.

"Sure," he replied.

If Dennett stopped right there, at the relationship between human (coddled and enslaved or not) and machine, he would be among a select group of thinkers who keep the traffic flowing from brain science to computer science and back again. But Dennett takes the argument a few steps further, arriving at a synthesis whose sheer scope makes me mutter "Hegel" under my breath. Dennett may have begun as an analytic philosopher using AI to clarify problems of philosophy, but when he puts mind and brain in the context of evolution, it seems obvious that he has matured into something else again.

In a footnote in his book Darwin's Dangerous Idea: Evolution and the Meanings of Life (1995), Dennett points out that Charles Babbage (the mathematician and early computer pioneer) and Charles Darwin attended the same London parties, probably chewed the same mutton, and quite possibly discussed some of the notions that later became so hugely influential in evolutionary theory and computer science. The meeting of Darwin and Babbage brought a central idea of Darwin's Dangerous Idea alive for me -- namely, that evolution and computers are driven by similar processes that are familiar, at least in part, to any software engineer. You write small pieces of dumb code that work with other simple pieces of code in order to produce systems of greater complexity, which in turn interact with other complex systems in order to give higher degrees of functionality, and so on, until you wind up with a program that is smart -- or, at least, smart enough to do something that needs doing. Finally, you get operating systems, you get an Internet -- or, depending on your raw materials and the time allotted, you get DNA, mammals, and self-awareness.
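The bootstrapping process described above -- dumb local steps accumulating into something that looks designed -- can be made concrete with a toy program. The following sketch is my own illustration, not anything from the article or from Dennett; it is in the spirit of Richard Dawkins's well-known "weasel" exercise: blind mutation plus mindless selection, iterated, converges on a target with no engineer choosing any individual step.

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(s):
    # A dumb, local measure: how many positions match the target.
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):
    # Blind variation: each character may be randomly replaced.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

def evolve(seed=0, population=100):
    random.seed(seed)
    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    generations = 0
    while fitness(parent) < len(TARGET):
        # Selection: keep the fittest of parent and its mutated offspring.
        candidates = [parent] + [mutate(parent) for _ in range(population)]
        parent = max(candidates, key=fitness)
        generations += 1
    return parent, generations
```

No line of this program "knows" the goal; selection over random noise is enough to produce the appearance of design -- which is, in miniature, the point of Darwin's Dangerous Idea.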

"Turing shows that if a computer can add, subtract, multiply, and divide, and if it can tell the difference between zero and one, it can do anything. You can take that set of mindless abilities and build them up into structures of indefinite discriminative power, indefinite discerning power, indefinite reflective power. You can make a whole mind ... you can get ideas to think for themselves."
--Daniel Dennett, in an interview with Harvey Blume.
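Turing's point in the quotation above can be sketched in a few lines. The following is an illustrative example of my own (not Dennett's): starting from nothing but arithmetic on zero and one, you can build Boolean logic, and from that logic a one-bit full adder -- the seed from which arithmetic of any width, and ultimately a whole computer, can be composed.

```python
# Boolean logic from bare arithmetic on 0 and 1.
def NOT(a):
    return 1 - a

def AND(a, b):
    return a * b

def OR(a, b):
    return a + b - a * b

# Compose the mindless pieces into something with more structure.
def XOR(a, b):
    return OR(AND(a, NOT(b)), AND(NOT(a), b))

def full_adder(a, b, carry_in):
    # Adds three bits; chained full adders add numbers of any width.
    s = XOR(XOR(a, b), carry_in)
    carry_out = OR(AND(a, b), AND(carry_in, XOR(a, b)))
    return s, carry_out
```

Each function is trivial on its own; stacked, they yield exactly the "structures of indefinite discriminative power" the quotation describes.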

Of course, in the case of evolution there is no software engineer; there are only the bits of genetic code worked by natural selection over millennia into myriad forms of life. Darwin's Dangerous Idea argues that evolution is not a feat of pure thought or of magic or of brilliant planning but simply a feat of engineering over time -- except that it's a case of engineering minus any particular engineer, design minus a designer. In a sense, you have a synthesis as broad as any Hegel attempted, but instead of the World Spirit you have dumb processes of evolution that all on their own suffice to bring about animation, self-replication, intelligence, and the rest. Does evolution obey the dictates of some presiding genius? Does it require a bit of divine guidance? No. Is it nevertheless sensible to think of it as possessing an intelligence that can be usefully compared to that of thermostats and minds? Yes.

When I suggested to Dennett that his fusion of Cog, the Cogito, and Darwin involved three of the dominant motifs of our day -- computers, the human brain, and natural selection -- I wanted to use the image of tectonic plates coming together. Dennett put it somewhat differently. He knows he's done something right, because he entertains, as he says, "a vivid sense of the alternative": "I've seen it when people have a theory and it starts to go sour -- they keep having to do ad hoc fixes, plugging one leak after another. I know what that feels like. And I don't feel it. On the contrary, things keep falling into place. Very few leaks to repair."

Some would disagree about the size and significance of the leaks, or about the integrity of the whole enterprise. John Searle would argue that there is a profound difference between computer algorithms, however sophisticated, and human thought -- a difference of kind that Dennett has seen fit to ignore. Stephen Jay Gould, one of Dennett's harsher critics, would say that any comparison between the logic of computers and the contingencies of natural selection is bound to be forced, and any system that relies on such a comparison is doomed to come apart under stress. Dennett, who still calls himself an analytic philosopher, comes under the kind of fire continental philosophers often had to take from their analytic peers -- hold on, slow down, that center does not hold, things are different, more disparate, than they appear.

Disparateness, difference, fragmentation, discontinuity -- these are familiar postmodern verities. In some ways Dennett reinforces them. The self as he pictures it is much more a collection, an ensemble, a neuro-environment-cum-information-system than a unified entity. His writings are full of devices to get under the hood of self-consciousness and examine possible ways a Cogito might be engineered. Thus his interest in Multiple Personality Disorder -- and in the way the brain can generate not just a self but selves. But similarities between Dennett's thinking and postmodernism are of secondary importance. It's the differences that will count. While so much talk has been devoted to postmodernism, the fields of neuroscience, computer science, and evolutionary biology have been gaining explanatory power, increasing their hold on the imagination, and imposing their constraints on our thought. In linking these disciplines and smoothing away rough spots at the joints, Dennett may well be proposing a sort of overarching system that is suited to the next century. It's a neuro-cyber-Darwinian synthesis that may just mean it's time to break out the guitars and sing, "Roll over Georg Hegel and tell Friedrich Nietzsche the news."

Next page ... A Conversation With Daniel Dennett



Harvey Blume, a writer living in Cambridge, Massachusetts, is a frequent contributor to Atlantic Unbound.

Copyright © 1998 by The Atlantic Monthly Company. All rights reserved.