Google's Intelligence Is More Baboon Than Human

A new baboon study shows that they know just about as much about the English language as Google.


If you want to know how the world's biggest artificial intelligence operation works, allow me to direct you to this study about baboons published this week in Science. I'm serious: Google's machine learning methods bear a familial resemblance to the way the baboons processed language.

First, let's gloss the study. The researchers gave six baboons a game on tablet computers that they could play in exchange for food rewards. The game, such as it was, showed the baboons four-letter strings and asked them to pick one of two symbols. After a mere six weeks of training, the baboons could tell an English word they'd never seen before (e.g. hope) from a non-English word scramble (e.g. tekl) 75 percent of the time, well above chance. But, of course, they couldn't actually read the words and know what they meant. They could spot "feet" as English, but had no idea that the word referred to their appendages. They did not even know what the symbols they were choosing meant, only that some selections led to food while others did not.

How'd they do it?

"Grainger thinks that the baboons learned to tell the real words from the fakes by using the frequencies of letter combinations within them. They learned which combinations were most likely to be found in real words, and made their choices accordingly," science blogger Ed Yong explains. "They had gleaned the stats of English, without any knowledge of the language itself."
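To see how far frequency statistics alone can get you, here is a minimal sketch of the idea (my illustration, not the study's actual procedure): tally how often each two-letter combination appears in a list of real English words, then score new strings by their letter-pair frequencies. The tiny word list stands in for the thousands of words the baboons saw.

```python
from collections import Counter

# Toy stand-in for the study's training vocabulary.
english_words = ["hope", "feet", "hand", "time", "fine", "tree",
                 "read", "seat", "note", "late", "rate", "mate"]

# Count how often each two-letter combination appears in real words.
bigram_counts = Counter()
for word in english_words:
    for i in range(len(word) - 1):
        bigram_counts[word[i:i+2]] += 1

def wordlikeness(string):
    """Score a string by how common its letter pairs are in real words."""
    return sum(bigram_counts[string[i:i+2]] for i in range(len(string) - 1))

# An unseen English-like word outscores a scramble, even though
# nothing here "knows" what either string means.
print(wordlikeness("gate") > wordlikeness("tekl"))  # True
```

The scorer has gleaned the stats of English without any knowledge of the language, which is exactly the trick Yong describes.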

That last line is really important, and it's where the Google -- and machine learning -- connection comes in. When Google translates between English and Spanish, the software doing the work knows only the statistical correlations it has extracted from human-translated texts. It's using only the statistics of language, rather than its symbolic meaning, to complete its task.
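A crude sketch makes the point (this is my toy illustration, not Google's actual system, which is vastly more sophisticated): given human-translated sentence pairs, count which Spanish word reliably shows up when an English word does, and rarely shows up otherwise. The program "translates" without any notion of meaning.

```python
from collections import defaultdict, Counter

# Tiny stand-in for a corpus of human-translated sentence pairs.
parallel_corpus = [
    ("the house is red", "la casa es roja"),
    ("the house is big", "la casa es grande"),
    ("the table is red", "la mesa es roja"),
    ("the dog is big",   "el perro es grande"),
]

cooccur = defaultdict(Counter)   # English word -> Spanish co-occurrence counts
es_totals = Counter()            # how often each Spanish word appears overall
for en_sent, es_sent in parallel_corpus:
    es_words = es_sent.split()
    es_totals.update(es_words)
    for en_word in en_sent.split():
        for es_word in es_words:
            cooccur[en_word][es_word] += 1

def translate(en_word):
    """Pick the Spanish word that co-occurs with en_word and rarely appears without it."""
    candidates = cooccur[en_word]
    return max(candidates,
               key=lambda es: candidates[es] - (es_totals[es] - candidates[es]))

print(translate("house"))  # casa
```

Pure co-occurrence counting lands on "casa" for "house", yet the program knows nothing about houses, just as the baboons knew nothing about feet.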

That's one reason we have such a hard time understanding what Google's power is like. It is neither fundamentally human nor fundamentally superior. It's just different. So the next time you imagine the all-powerful Google, do not imagine HAL or Skynet, imagine this guy: