July/August 2009

Get Smarter

Pandemics. Global warming. Food shortages. No more fossil fuels. What are humans to do? The same thing the species has done before: evolve to meet the challenge. But this time we don’t have to rely on natural evolution to make us smart enough to survive. We can do it ourselves, right now, by harnessing technology and pharmacology to boost our intelligence. Is Google actually making us smarter?

When people hear the phrase intelligence augmentation, they tend to envision people with computer chips plugged into their brains, or a genetically engineered race of post-human super-geniuses. Neither of these visions is likely to be realized, for reasons familiar to any Best Buy shopper. In a world of ongoing technological acceleration, today’s cutting-edge brain implant would be tomorrow’s obsolete junk—and good luck if the protocols change or you’re on the wrong side of a “format war” (anyone want a Betamax implant?). And then there’s the question of stability: Would you want a chip in your head made by the same folks that made your cell phone, or your PC?

Likewise, the safe modification of human genetics is still years away. And even after genetic modification of adult neurobiology becomes possible, the science will remain in flux; our understanding of how augmentation works, and what kinds of genetic modifications are possible, will still change rapidly. As with digital implants, the brain modification you might undergo one week could become obsolete the next. Who would want a 2025-vintage brain when you’re competing against hotshots with Model 2026?

Yet in one sense, the age of the cyborg and the super-genius has already arrived. It just involves external information and communication devices instead of implants and genetic modification. The bioethicist James Hughes of Trinity College refers to all of this as “exocortical technology,” but you can just think of it as “stuff you already own.” Increasingly, we buttress our cognitive functions with our computing systems, no matter that the connections are mediated by simple typing and pointing. These tools enable our brains to do things that would once have been almost unimaginable:

• powerful simulations and massive data sets allow physicists to visualize, understand, and debate models of an 11-dimensional universe;

• real-time data from satellites, global environmental databases, and high-resolution models allow geophysicists to recognize the subtle signs of long-term changes to the planet;

• cross-connected scheduling systems allow anyone to assemble, with a few clicks, a complex, multimodal travel itinerary that would have taken a human travel agent days to create.

If that last example sounds prosaic, it simply reflects how embedded these kinds of augmentation have become. Not much more than a decade ago, such a tool was outrageously impressive—and it destroyed the travel-agent industry.

That industry won’t be the last one to go. Any occupation requiring pattern-matching and the ability to find obscure connections will quickly morph from the domain of experts to that of ordinary people whose intelligence has been augmented by cheap digital tools. Humans won’t be taken out of the loop—in fact, many, many more humans will have the capacity to do something that was once limited to a hermetic priesthood. Intelligence augmentation decreases the need for specialization and increases participatory complexity.

As the digital systems we rely upon become faster, more sophisticated, and (with the usual hiccups) more capable, we’re becoming more sophisticated and capable too. It’s a form of co-evolution: we learn to adapt our thinking and expectations to these digital systems, even as the system designs become more complex and powerful to meet more of our needs—and eventually come to adapt to us.

Consider the Twitter phenomenon, which went from nearly invisible to nearly ubiquitous (at least among the online crowd) in early 2007. During busy periods, the user can easily be overwhelmed by the volume of incoming messages, most of which are of only passing interest. But there is a tiny minority of truly valuable posts. (Sometimes they have extreme value, as they did during the October 2007 wildfires in California and the November 2008 terrorist attacks in Mumbai.) At present, however, finding the most-useful bits requires wading through messages like “My kitty sneezed!” and “I hate this taco!”

But imagine if social tools like Twitter had a way to learn what kinds of messages you pay attention to, and which ones you discard. Over time, the messages that you don’t really care about might start to fade in the display, while the ones that you do want to see could get brighter. Such attention filters—or focus assistants—are likely to become important parts of how we handle our daily lives. We’ll move from a world of “continuous partial attention” to one we might call “continuous augmented awareness.”
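To make the idea concrete, here is a minimal sketch, in Python, of how such an attention filter might work. It assumes nothing more than a log of which messages a user opened and which they skipped; the AttentionFilter class, its crude author-and-word features, and the sample messages are illustrative assumptions, not a description of any existing Twitter feature.

    import math
    from collections import defaultdict

    class AttentionFilter:
        """Hypothetical sketch: learn which messages a user engages with,
        then brighten or fade new ones accordingly."""

        def __init__(self, learning_rate=0.1):
            self.learning_rate = learning_rate
            self.weights = defaultdict(float)  # feature -> learned interest

        def _features(self, message):
            # Crude features: the author plus each lowercased word in the text.
            return [f"author:{message['author']}"] + [
                f"word:{w}" for w in message["text"].lower().split()
            ]

        def record(self, message, engaged):
            # Nudge feature weights up when the user reads a message, down when they skip it.
            delta = self.learning_rate if engaged else -self.learning_rate
            for feature in self._features(message):
                self.weights[feature] += delta

        def brightness(self, message):
            # Score a new message and squash the score into a 0..1 display brightness.
            score = sum(self.weights[feature] for feature in self._features(message))
            return 1 / (1 + math.exp(-score))

    af = AttentionFilter()
    af.record({"author": "newsdesk", "text": "Wildfire evacuation routes updated"}, engaged=True)
    af.record({"author": "acquaintance", "text": "My kitty sneezed!"}, engaged=False)
    print(af.brightness({"author": "newsdesk", "text": "New evacuation routes posted"}))    # brighter
    print(af.brightness({"author": "acquaintance", "text": "I hate this taco!"}))           # dimmer

Over time the weights come to encode what the user actually attends to, which is all the learning the idea requires; a real system would of course draw on far richer signals than word counts.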

As processor power increases, tools like Twitter may be able to draw on the complex simulations and massive data sets that have unleashed a revolution in science. They could become individualized systems that augment our capacity for planning and foresight, letting us play “what-if” with our life choices: where to live, what to study, maybe even where to go for dinner. Initially crude and clumsy, such a system would get better with more data and more experience; just as important, we’d get better at asking questions. These systems, perhaps linked to the cameras and microphones in our mobile devices, would eventually be able to pay attention to what we’re doing, and to our habits and language quirks, and learn to interpret our sometimes ambiguous desires. With enough time and complexity, they would be able to make useful suggestions without explicit prompting.

And such systems won’t be working for us alone. Intelligence has a strong social component; for example, we already provide crude cooperative information-filtering for each other. In time, our interactions through the use of such intimate technologies could dovetail with our use of collaborative knowledge systems (such as Wikipedia), to help us not just to build better data sets, but to filter them with greater precision. As our capacity to provide that filter gets faster and richer, it increasingly becomes something akin to collaborative intuition—in which everyone is effectively augmenting everyone else.
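The “crude cooperative information-filtering” we already do for one another can likewise be sketched in a few lines. The example below is a toy version of user-based collaborative filtering, assuming a made-up table of which items a few readers found worth their attention; the names, items, and scoring are purely illustrative.

    import math

    # Hypothetical attention data: 1 means the reader found the item worthwhile, 0 means they skipped it.
    RATINGS = {
        "ana":   {"wildfire-map": 1, "kitty-post": 0, "transit-strike": 1},
        "ben":   {"wildfire-map": 1, "kitty-post": 0, "recipe-thread": 1},
        "chloe": {"kitty-post": 1, "recipe-thread": 1},
    }

    def similarity(a, b):
        # Cosine similarity over the items both readers have rated.
        shared = set(a) & set(b)
        if not shared:
            return 0.0
        dot = sum(a[i] * b[i] for i in shared)
        norm = math.sqrt(sum(a[i] ** 2 for i in shared)) * math.sqrt(sum(b[i] ** 2 for i in shared))
        return dot / norm if norm else 0.0

    def recommend(user, ratings):
        # Rank items the user hasn't seen by how much similar readers valued them.
        scores = {}
        for other, theirs in ratings.items():
            if other == user:
                continue
            sim = similarity(ratings[user], theirs)
            for item, value in theirs.items():
                if item not in ratings[user]:
                    scores[item] = scores.get(item, 0.0) + sim * value
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    print(recommend("ana", RATINGS))  # ana's filter is assembled from ben's and chloe's attention

Each reader’s choices quietly sharpen everyone else’s filter, which is the collaborative intuition described above, just written down in code.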
