Unless IBM's Watson can do more than play Jeopardy!, Garry Kasparov sees it as little more than a complicated toy.
The true test of Watson's significance, Kasparov says, will be whether it can be translated "into something useful, something groundbreaking"—applied in a more meaningful way, beyond the game show.
In the annals of man vs. machine competition (the topic of this month's Atlantic cover story), Kasparov holds the most prominent of historic places. The Russian world chess champion defeated IBM supercomputer Deep Blue in 1996, then lost in a six-game rematch in 1997 that surprised many and revealed a nascent truth: In closed-system contests of raw data computation, computer technology had evolved an edge over the most talented and disciplined human minds. Kasparov accused IBM of cheating in the match and requested a rematch but was denied.
Find below Kasparov's initial take on Watson, offered via e-mail through an aide:
- A convincing victory under strict parameters, and if we stay within those limits, Watson can be seen as an incremental advance in how well machines understand human language. But if you put the questions from the show into Google, you also get good answers, even better ones if you simplify the questions. To me, this means Watson is doing a good job of breaking language down into points of data it can mine very quickly, and that it does this slightly better than Google does against the entire Internet.
- Much as with computers playing chess, reducing the problem to "crunchable" elements can simulate the results humans achieve even though the computer's method is entirely different. If the result—the chess move, the Jeopardy! answer—is all that matters, it's a success. If how the result is achieved matters more, I'm not so sure. For example, Deep Blue had no real impact on chess or science despite the hype surrounding its sporting achievement in defeating me. If Watson's skills can be translated into something useful, something groundbreaking, that is the test. If all it can do is beat humans on a game show, Watson is just a passing entertainment akin to the wind-up automata of the 18th century.
- My concern about its utility, and I read they would like it to answer medical questions, is that Watson's performance reminded me of chess computers. They play fantastically well in maybe 90% of positions, but there is a selection of positions they do not understand at all. Worse, by definition they do not understand what they do not understand and so cannot avoid them. A strong human Jeopardy! player, or a human doctor, may get the answer wrong, but he is unlikely to make a huge blunder or category error—at least not without being aware of his own doubts. We are also good at judging our own level of certainty. A computer can simulate this by an artificial confidence measurement, but I would not like to be the patient who discovers the medical equivalent of answering "Toronto" in the "US Cities" category, as Watson did.
- I would not like to downplay the Watson team's achievement, because clearly they did something most people did not yet believe possible. And IBM can be lauded for these experiments. I would only like to wait and see if there is anything for Watson beyond Jeopardy!. These contests attract the popular imagination, but it is possible that by defining the goals so narrowly they are aiming too low, thereby limiting the possibilities of their creations.