Although artificial intelligence (AI) has experienced a number of “springs” and “winters” in its roughly 60-year history, it is safe to expect the current AI spring to be both lasting and fertile. Applications that seemed like science fiction a decade ago are becoming science fact at a pace that has surprised even many experts.

All of this has created considerable uncertainty about our future relationship with machines, the prospect of technological unemployment, and even the very fate of humanity.[i] Regarding the latter topic, Elon Musk has described AI as “our biggest existential threat.” Stephen Hawking warned that “The development of full artificial intelligence could spell the end of the human race.” In his widely discussed book Superintelligence, the philosopher Nick Bostrom discusses the possibility of a kind of technological “singularity” at which point the general cognitive abilities of computers exceed those of humans.[ii]

Discussions of these issues are often muddied by the tacit assumption that, because computers outperform humans at various circumscribed tasks, they will soon be able to “outthink” us more generally. Continual rapid growth in computing power and AI breakthroughs notwithstanding, this premise is far from obvious. Furthermore, it distracts attention from a less speculative topic in need of deeper attention than it typically receives: the ways in which machine intelligence and human intelligence complement one another.

While AI has made a dramatic comeback in the past five years, we believe that another, equally venerable concept is long overdue for a comeback of its own: intelligence augmentation. With intelligence augmentation, the ultimate goal is not building machines that think like humans but designing machines that help humans think better.

Two observations indicate that today’s AI bears little resemblance to human intelligence. First, human general intelligence is quantified by the so-called “g factor,” which measures the degree to which one type of cognitive ability (say, learning a foreign language) is associated with other cognitive abilities (say, mathematical ability). Such generality is not characteristic of today’s AI applications: An algorithm designed to drive a car would be useless at detecting a face in a crowd or guiding a domestic robot assistant.
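For readers who want a more concrete sense of the “positive manifold” behind the g factor, the brief sketch below uses simulated, entirely hypothetical test scores: when different cognitive tests all reflect a shared latent ability, their pairwise correlations are positive and a single principal component accounts for much of the total variance.

```python
# Minimal sketch (hypothetical data) of the "positive manifold" behind the g factor:
# scores on different cognitive tests correlate positively, and one principal
# component captures much of the shared variance.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
g = rng.normal(size=n)  # assumed latent general ability
tests = {
    "vocabulary": 0.7 * g + 0.7 * rng.normal(size=n),
    "math":       0.6 * g + 0.8 * rng.normal(size=n),
    "spatial":    0.5 * g + 0.9 * rng.normal(size=n),
}
scores = np.column_stack(list(tests.values()))

# All pairwise correlations are positive.
print(np.corrcoef(scores, rowvar=False).round(2))

# Share of total variance explained by the first principal component (a proxy for g).
eigvals = np.linalg.eigvalsh(np.cov(scores, rowvar=False))
print(f"first component explains {eigvals[-1] / eigvals.sum():.0%} of the variance")
```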

Second, and more fundamentally, current manifestations of AI have a narrow type of “intelligence” in that they solve problems and achieve goals in ways that do not involve implementing human psychology or brain science. Rather, today’s AI uses machine learning: the process of fitting highly complex and powerful—but typically uninterpretable—statistical models to massive amounts of data.
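To make the phrase “fitting a statistical model to data” concrete, here is a minimal, generic illustration using the scikit-learn library; it is our own sketch on synthetic data, not a description of any particular system. A model is “trained” by adjusting its internal parameters to fit labeled examples, and the resulting ensemble of decision trees is accurate but not human-readable.

```python
# Minimal sketch of fitting a statistical model to data (synthetic, illustrative only).
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)  # "fit" = learn from data
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")

# The fitted model is a weighted ensemble of many decision trees; its individual
# parameters carry no human-readable meaning, which is what makes such models
# powerful yet largely uninterpretable.
```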

AI algorithms enjoy many obvious advantages over the human mind. We humans must settle for solutions that “satisfice” rather than optimize because our memory and reasoning ability are limited. In contrast, computers do not get tired; they make consistent decisions before and after lunchtime; they can process decades’ worth of legal cases, medical journal articles, or accounting regulations with minimal effort; and they can evaluate five hundred predictive factors far more accurately than unaided human judgment can evaluate five.

However, algorithms are reliable only to the extent that the data used to train them are sufficiently complete and representative of the environment in which they are to be deployed. When this condition is not met, all bets are off. When routine tasks can be encoded in big data, it is a safe bet that algorithms can be built to perform them better than humans can. But such algorithms will lack the conceptual understanding and commonsense reasoning needed to evaluate novel situations. They can make inferences from structured hypotheses but lack the intuition to prioritize which hypothesis to test in the first place.
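The sketch below illustrates the representativeness point with synthetic data (again, a hypothetical example of our own): a model trained only on one regime of inputs performs well on similar inputs but falls to roughly chance accuracy when deployed on inputs from a regime it never observed.

```python
# Minimal sketch (synthetic data) of the representativeness problem: a model fit on
# data from one regime can fail badly when deployed in a regime it never observed.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_data(n, low, high):
    x = rng.uniform(low, high, size=(n, 1))
    y = (np.abs(x[:, 0]) > 1.0).astype(int)  # true rule: |x| > 1
    return x, y

# Training data covers only x >= 0, where the rule looks like a simple threshold.
X_train, y_train = make_data(2000, 0.0, 2.0)
model = LogisticRegression().fit(X_train, y_train)

X_seen, y_seen = make_data(500, 0.0, 2.0)       # deployment data like the training data
X_unseen, y_unseen = make_data(500, -2.0, 0.0)  # deployment data from the unseen regime

print(f"accuracy on familiar inputs:   {model.score(X_seen, y_seen):.2f}")    # high
print(f"accuracy on unfamiliar inputs: {model.score(X_unseen, y_unseen):.2f}")  # near chance
```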

Just as humans need algorithms to avoid decision traps, the inherent limitations of big data imply the need for human judgment to keep mission-critical algorithms in check. Together, these points imply that the case for human-computer symbiosis is stronger than ever. In particular, one type of mental operation that cannot (and must not) be outsourced to algorithms is reasoning about fairness, societal acceptability, and morality. The naïve view that algorithms are “fair” and “objective” simply because they use hard data must be tempered with recognition of the need for oversight.

In a nutshell, humans plus computers plus a better process for working with algorithms will yield better results than either the most talented humans or the most advanced algorithms working in isolation. The need to design those better processes for human-computer collaboration deserves more attention than it typically gets in discussions of data science or artificial intelligence.

Read the article on Deloitte University Press: Cognitive Collaboration: Why Humans and Computers Think Better Together


[i] Regarding technological unemployment, a recent World Economic Forum report predicted that the next four years will see more than 5 million jobs lost to AI-fueled automation and robotics. See World Economic Forum, The future of jobs: Employment, skills and workforce strategy for the fourth industrial revolution, January 2016, http://www3.weforum.org/docs/WEF_Future_of_Jobs.pdf.

[ii] Regarding Musk and Hawking on AI as an existential threat, see “Elon Musk: Artificial intelligence is our biggest existential threat,” Guardian, October 27, 2014, and “Stephen Hawking warns artificial intelligence could end mankind,” BBC News, December 2, 2014. In his book Superintelligence (Oxford University Press, 2014), Nick Bostrom entertains a variety of speculative scenarios about the emergence of “superintelligence,” which he defines as any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest.