The age of artificial intelligence may be very nearly upon us, which is, on one hand, great news. Machines have long helped humans do things better, faster, more safely, and more affordably.
But the rise of artificial intelligence is also, leaders in technology keep reminding us, cause for concern. If we fail to take seriously the potential for a world in which smart machines run amok, artificial intelligence could become more dangerous to humanity than nuclear weapons, Tesla CEO Elon Musk has said.
"Hope we're not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable," Musk tweeted on August 3, 2014.
Bill Gates, in a Reddit AMA this week, said he agrees with Musk. "I am in the camp that is concerned about super intelligence," Gates wrote. "First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don't understand why some people are not concerned."
That's a common refrain: The rise of machines will be okay as long as we manage it well. But what does managing it well even look like? One of the keys may be to build machines that can reflect on their own behaviors (and on the behaviors of other artificially intelligent machines) and that understand their connection to the physical world. Today, most models of artificial intelligence embody a kind of Cartesian dualism: the computer's mind treats itself as wholly separate from the computer's body.
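To make that dualism concrete, here is a minimal, purely illustrative sketch; every name in it (SelfModel, DualistAgent, EmbodiedAgent) is hypothetical rather than drawn from any real system. The first agent plans as if it had no body to worry about; the second folds a model of its own state and past behavior into its decisions:

```python
from dataclasses import dataclass

@dataclass
class SelfModel:
    """The agent's model of its own body: its energy and its recent track record."""
    battery: float        # remaining energy, 0.0 to 1.0 (illustrative)
    recent_errors: int    # how many of its last actions failed (illustrative)

class DualistAgent:
    """Cartesian-style agent: the 'mind' decides from the external world alone."""
    def act(self, world_state: dict) -> str:
        # Ignores its own condition entirely.
        return "explore" if world_state.get("unknown_area") else "idle"

class EmbodiedAgent:
    """Reflective agent: its own body and behavior feed back into its decisions."""
    def __init__(self) -> None:
        self.self_model = SelfModel(battery=1.0, recent_errors=0)

    def act(self, world_state: dict) -> str:
        # Reflect on its own state before committing to an action.
        if self.self_model.battery < 0.2:
            return "recharge"        # the body constrains the mind
        if self.self_model.recent_errors > 3:
            return "recalibrate"     # past behavior prompts self-correction
        return "explore" if world_state.get("unknown_area") else "idle"
```

The point of the sketch is the feedback loop: in the embodied agent, the "body" (battery level, error history) is an input to the "mind," which is exactly the connection the dualist agent lacks.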