Minutes later, Yejin Choi saw Marcus’s snarky tweet. The timing was awkward. Within the hour, Choi was scheduled to give a talk at a prominent AI conference on her latest research project: a system, nicknamed COMET, that was designed to use an earlier version of GPT-2 to perform commonsense reasoning.
Quickly, Choi—a senior research manager at the Allen Institute for AI in Seattle, who describes herself as an “adventurer at heart”—fed COMET the same prompt Marcus had used (with its wording slightly modified to match COMET’s input format):
Gary stacks kindling and logs and drops some matches.
COMET generated 10 inferences about why Gary might be dropping the matches. Not all of the responses made sense, but the first two did: He “wanted to start a fire” or “to make a fire.” Choi tweeted the results in reply to Marcus and strode up to the podium to include them in her presentation. “It seemed only appropriate,” she said.
Common sense has been called the “dark matter of AI”—both essential and frustratingly elusive. That’s because common sense consists of implicit information—the broad (and broadly shared) set of unwritten assumptions and rules of thumb that humans automatically use to make sense of the world. For example, consider the following scenario:
A man went to a restaurant. He ordered a steak. He left a big tip.
If you were asked what he ate, the answer—steak—comes effortlessly. But nowhere in that little scene is it ever stated that the man actually ate anything. When Ray Mooney, the director of the Artificial Intelligence Laboratory at the University of Texas at Austin, pointed this out after giving me the same pop quiz, I didn’t believe him at first. “People don’t even realize that they’re doing this,” he said. Common sense lets us read between the lines; we don’t need to be explicitly told that food is typically eaten in restaurants after people order and before they leave a tip.
Computers do. It’s no wonder that commonsense reasoning emerged as a primary concern of AI research in 1958 (in a paper titled “Programs With Common Sense”), not long after the field of AI was born. “In general, you can’t do natural-language understanding or vision or planning without it,” said Ernest Davis, a computer scientist at New York University who has studied common sense in AI since the 1980s.
Still, progress has been infamously slow. At first, researchers tried to translate common sense into the language of computers: logic. They surmised that if all the unwritten rules of human common sense could be written down, computers should be able to reason with them just as they do arithmetic. This symbolic approach, which came to be known as “good old-fashioned artificial intelligence” (or GOFAI), enabled some early successes, but its handcrafted rules didn’t scale. “The amount of knowledge which can be conveniently represented in the formalisms of logic is kind of limited in principle,” said Michael Witbrock, an AI researcher at the University of Auckland in New Zealand. “It turned out to be a truly overwhelming task.”
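To make the symbolic approach concrete, here is a minimal sketch (illustrative only, not the code of any real GOFAI system) of what it means to hand-write commonsense knowledge as if-then rules and let a program apply them. The rule names and facts are invented for the restaurant scene described earlier; a simple forward-chaining loop applies rules until no new conclusions emerge.

```python
# Illustrative sketch of GOFAI-style reasoning: commonsense knowledge is
# hand-encoded as rules (premises -> conclusion), and an inference engine
# applies them by forward chaining. All names here are hypothetical.

def forward_chain(facts, rules):
    """Apply rules repeatedly until no new fact can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # Fire a rule when all its premises are known and it adds something new.
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Hand-coded "unwritten rules" a human never needs stated explicitly.
rules = [
    (frozenset({"went_to_restaurant", "ordered_steak"}), "was_served_steak"),
    (frozenset({"was_served_steak", "left_tip"}), "ate_steak"),
]

# The story only states these three facts...
facts = forward_chain({"went_to_restaurant", "ordered_steak", "left_tip"}, rules)

# ...yet the engine derives the unstated one, just as a reader does.
print("ate_steak" in facts)  # → True
```

The catch, as the researchers quoted above note, is that every such rule must be written by hand, and the number of rules needed to cover everyday life is overwhelming.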