Why Computers Will Never Replace Us

Even if machines do take over the world, economic theory suggests it will pay for them to keep humans around



A reader comments in a New York Times blog on the announcement of a radically new kind of chip with a "neurosynaptic core," inspired more closely than previous semiconductors by the human brain, under development by IBM with funding from the Defense Advanced Research Projects Agency and a consortium of research universities. Shades of the Sputnik era! "JR" writes from New York:

I believe we're 40-50 years off from most humans being phased out, with most every task automated. This ridiculous line of thinking that keeps harping on how technology always creates new jobs is coming to its end. Just because something was true in the past does not mean it will always be true in the future. That was mostly based on the fact that technology was still trying to keep pace with humans, but now there is an exponential explosion that seems to be growing faster and faster, and much of the technology is cutting humans out of the equation, simulating skills that were once strictly the purview of humans.

Maybe 300 years from now humans will not be needed at all....

Let's say JR is right and computers will eventually be able to surpass people in everything, even so-called right-brain, emotional processing. Will people be obsolete? I doubt it. The economic theory of comparative advantage explains why. Assuming there will still be people, even if the computers are running everything, it will pay for the machines to let people do what they are relatively better at. There is likely to be a higher opportunity cost for computers to take on the more intuitive analysis for which the human brain-body system has evolved; it will pay for them to concentrate instead on tasks at which their abilities exceed people's by an even higher multiple. In the case of computers and people, as I suggested about IBM's Watson and Jeopardy!, there will always be elements of tacit knowledge and common sense that will be extremely expensive to achieve electronically. In fact, as impressive as Watson's replies were, it missed a question that was relatively easy for human common sense. So even if, for example, computers surpass physicians at diagnostic reasoning, it will be cheaper, more effective, and safer to have their judgment double-checked by a real doctor. One possible result of the growth of computer intelligence might be, in fact, that human professionals will spend relatively more time developing the intuitive side of their work. That was one of the points made by the computing pioneer Norbert Wiener in the 1950s.
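The comparative-advantage argument can be made concrete with a toy calculation. The numbers below are hypothetical, chosen only for illustration: even if a computer is absolutely better than a person at both diagnosis and empathetic bedside work, the question that matters is each party's opportunity cost, what it gives up of one task to produce a unit of the other.

```python
# Toy comparative-advantage calculation (all numbers hypothetical).
# Suppose a computer is absolutely better at BOTH tasks,
# measured in units of output per hour of effort.
output_per_hour = {
    "computer": {"diagnosis": 100, "empathy": 4},
    "human":    {"diagnosis": 10,  "empathy": 2},
}

def opportunity_cost(agent: str, task: str, other_task: str) -> float:
    """Units of other_task forgone per unit of task produced."""
    return output_per_hour[agent][other_task] / output_per_hour[agent][task]

# Cost of one unit of empathy, measured in diagnoses forgone:
computer_cost = opportunity_cost("computer", "empathy", "diagnosis")  # 100/4 = 25.0
human_cost = opportunity_cost("human", "empathy", "diagnosis")        # 10/2  = 5.0

# The human gives up far fewer diagnoses per unit of empathy, so even a
# uniformly superior computer comes out ahead by leaving that work to people.
assert human_cost < computer_cost
```

Here the computer is ten times better at diagnosis and twice as good at empathy, yet every hour it spends on empathy costs it 25 diagnoses, while the same hour costs a human only 5. Specializing and trading makes both better off, which is the standard Ricardian result the essay invokes.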


Edward Tenner is a historian of technology and culture, and an affiliate of the Center for Arts and Cultural Policy at Princeton's Woodrow Wilson School. He was a founding advisor of Smithsonian's Lemelson Center.


