Our machines' computational biases are not the same as our brains' cognitive biases, which is going to be weird
In this case, Nobel laureate psychologist Daniel Kahneman submitted himself to a group Q&A by the readers of Freakonomics, and someone asked him how Apple's Siri, and artificial intelligence more generally, might come to reflect human cognitive biases.
Q. With the launch of Siri and a stated aim to be using the data collected to improve the performance of its AI, should we expect these types of quasi-intelligences to develop the same behavioral foibles that we exhibit, or should we expect something completely different? And if something different, would that something be more likely to reflect the old "rational" assumptions of behavior, or some totally other emergent set of biases and quirks based on its own underlying architecture? My money's on emergent weirdness, but then, I don't have a Nobel Prize. -Peter Bennett

A. Emergent weirdness is a good bet. Only deduction is certain. Whenever an inductive short-cut is applied, you can search for cases in which it will fail. It is always useful to ask "What relevant factors are not considered?" and "What irrelevant factors affect the conclusions?" By their very nature, heuristic shortcuts will produce biases, and that is true for both humans and artificial intelligence, but *the heuristics of AI are not necessarily the human ones*.

The emphasis above is mine. If what he's saying is a little opaque, let me unpack it. Human brains take shortcuts in making decisions. Finding where those shortcuts lead us to dumb places has been his life's work. An artificial intelligence, say, Google's, also has to take shortcuts, but they are *not* the same ones that our brains use. So, when an AI takes a shortcut and ends up in a weird place, the resulting bias strikes us as uncanny.
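Kahneman's two diagnostic questions are easier to see in a toy example. Here is a minimal sketch, entirely my own illustration and not anything from the Q&A, of an inductive shortcut failing the way he describes: a hypothetical spam heuristic that counts keywords and ignores everything a human reader would use.

```python
# A toy "inductive shortcut": judge a message by keyword frequency alone.
# Hypothetical illustration; not any real spam filter's logic.
SPAM_WORDS = {"free", "winner", "prize", "click"}

def looks_spammy(message: str) -> bool:
    """Flag a message if spam keywords make up over a quarter of its words.

    Relevant factor not considered: who sent it and what the words mean
    in context. Irrelevant factor driving the conclusion: raw keyword
    frequency.
    """
    words = [w.strip(".,!?").lower() for w in message.split()]
    hits = sum(1 for w in words if w in SPAM_WORDS)
    return hits / max(len(words), 1) > 0.25

# The shortcut and a human reader agree here...
print(looks_spammy("Click here, winner! Claim your free prize!"))  # True

# ...but it also flags a message no human would ever call spam.
print(looks_spammy(
    "Is the conference free? The prize winner should click the link from the chair."
))  # True
```

The shortcut is cheap and usually right, which is why both brains and machines lean on heuristics like it. But the cases where it fails are not the cases where a human would fail, and that gap is the "emergent weirdness."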
Get ready, too: as we hand more decisions over to machines, AI bias is going to replace human cognitive bias more and more often.