Remember when Luke’s running the trench in the Death Star, and he’s about to fire his fateful shot, and at the last minute he decides to turn off the targeting computer and use the Force instead? We romanticize that moment—not just because it represents Luke’s coming into his own as a Jedi, but because to us, the decision to trust an intuition born deep in nature and honed over billions of lifetimes instead of some newfangled tech seems somehow right and good.
The irony, of course, is that in our galaxy, technology is the Force. Increasingly, it’s computers that train our intuition. It’s computers that help us perceive beyond our senses.
This happened relatively early, and vividly, in chess, of all places. Partly that’s because the same people who were involved in the early days of computing happened to also like the game, and partly it’s because chess is such a simple domain, relatively speaking: a two-dimensional grid, a few dozen pieces, a set of rules that can fit on a single page.
The chess board used to be a naked thing: You’d look at it and it wouldn’t tell you anything you didn’t already know. When you finished a game, the board wouldn’t tell you which moves had tipped the odds toward your opponent, or what you should have done differently. Your post-game analysis could only ever be as good as your imagination. Getting better was hard.
Today when you finish a game on your free chess app you can tap an Analysis button and see a move-by-move breakdown of your mistakes. The app will even tell you how bad each mistake was. You can tap on any move to see how alternative lines would have played out. This is possible because the chess engine can consider millions of possible future positions per second, where humans can only consider a handful. It distills the results of that vast counterfactual fanout into an interactive annotated board that can provide concise answers, oracle-like, to the questions you pose it, like “Who’s winning in this position? What’s the best response? What’s generally better here, having a bishop and knight or two knights together?”
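The heart of that counterfactual fanout is an exhaustive search over future positions. Here’s a minimal sketch of the idea—a toy minimax search on tic-tac-toe rather than chess, so it fits in a few lines; real engines layer alpha-beta pruning, transposition tables, and learned evaluation functions on top of this same skeleton:

```python
# Toy minimax search: the same exhaustive "what if?" fanout a chess
# engine performs, shown on tic-tac-toe so it stays self-contained.
# (Illustrative sketch only, not a real engine.)

WIN_LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, best_move) from `player`'s view: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w:
        return (1 if w == player else -1), None
    moves = [i for i, s in enumerate(board) if s == "."]
    if not moves:
        return 0, None  # board full: draw
    other = "O" if player == "X" else "X"
    best_score, best_move = -2, None
    for m in moves:
        child = board[:m] + player + board[m+1:]
        score, _ = minimax(child, other)
        score = -score  # the opponent's gain is our loss
        if score > best_score:
            best_score, best_move = score, m
    return best_score, best_move

# "Who's winning in this position? What's the best response?"
board = "XX.OO...."  # X to move; playing square 2 completes the top row
score, move = minimax(board, "X")  # score = +1 (win), move = 2
```

The engine’s oracle-like answers come from exactly this loop, run millions of times per second over a vastly larger tree.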
Where in the old days you could only know how good you were at chess relative to other players, and only in terms of wins and losses, today it’s possible to evaluate different parts of your game—openings, tactics, the endgame—and compare your performance in fine quantitative detail to a computer model of ideal play. To probe weaknesses, the computer can endlessly feed you specific training situations just on the far edge of your ability.
Studying chess in the era of cheap computation is like being able to slow the Matrix down to bullet-time: You can avoid bad futures because you can actually see them. A chess engine can easily tell you that a given move will be a mistake, and it can show you why, say, by pointing to the moment where you’d lose a key piece. By training their own vision against the computer’s this way, players are developing a deeper intuition about the game than had ever before been possible. “Chess education today revolves around learning how to learn from the computer,” writes the economist and blogger Tyler Cowen. “There are many more chess prodigies than ever before, and they mature at a more rapid pace.” Forty years ago, there were only two players with a rating above 2700; there are now 44 of them.
As hardware has gotten exponentially more powerful, the thing that happened to chess has started to become possible in, well, things that aren’t board games. Like basketball. Basketball is a game, too, but it takes place in three dimensions. “Moves” in basketball aren’t discrete decisions that can be expressed in algebraic notation like Nf3 (“Knight moves to the f3 square,” a typical early move in a chess game)—they involve human players flinging and dribbling a ball through space. But technical advances like the miniaturization of cameras, the ability to cheaply store and process thousands of hours of video, the development of computer vision algorithms that can track moving objects across frames and at different angles, and so on, have allowed us to break basketball down to its essence.
The SportVU system used by the NBA tracks every single player’s movement and the movement of the ball in every game of the season. It turns videos of basketball games into the kind of Xs-and-Os diagrams a coach might draw courtside on a whiteboard. But here, the diagrams are dynamic: They actually move in time, capturing every pass and shot in the game.
An article on Grantland explored how the Toronto Raptors have used the SportVU system to build a sophisticated model of their team’s play. Their model not only detects what kind of play is happening—a pick-and-roll, say—but compares the team’s players in every defensive play with theoretical “ghost players” whose movements minimize their opponent’s expected point value. The team learned, for instance, that the ghost players were “consistently more aggressive on help defense than the real Toronto players”—a finding that translated into actionable coaching advice.
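The “ghost player” idea reduces to an optimization: for each moment of play, search for the defender position that minimizes the offense’s expected points, then compare it to where the real player stood. Here’s a deliberately crude sketch of that loop—the geometry and the shot-value model below are made up for illustration and bear no relation to the Raptors’ actual model:

```python
import math

# Toy "ghost defender" sketch: grid-search for the single defender
# position that minimizes a made-up expected-points model of two
# attackers, then compare it against where the real defender stood.

def expected_points(shooter, defender, shot_value):
    """More space between shooter and defender -> easier shot (toy model)."""
    d = math.dist(shooter, defender)
    make_prob = min(0.9, 0.15 + 0.1 * d)
    return shot_value * make_prob

def total_expected_points(attackers, defender):
    return sum(expected_points(pos, defender, val) for pos, val in attackers)

def ghost_position(attackers, court=(0, 30)):
    """Grid-search the court for the spot minimizing expected points."""
    best_ep, best_pos = float("inf"), None
    for x in range(court[0], court[1] + 1):
        for y in range(court[0], court[1] + 1):
            ep = total_expected_points(attackers, (x, y))
            if ep < best_ep:
                best_ep, best_pos = ep, (x, y)
    return best_pos, best_ep

# Two attackers: a corner three (worth 3) and a mid-range shooter (worth 2).
attackers = [((0, 0), 3.0), ((15, 10), 2.0)]
real_defender = (25, 25)  # where the tracked player actually stood
ghost, ghost_ep = ghost_position(attackers)
real_ep = total_expected_points(attackers, real_defender)
# ghost_ep <= real_ep: the gap is the coaching signal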
It’s an entirely new kind of vision. We don’t often think of computation that way, as a visual aid, because it’s somewhat difficult to describe what it helps us see. Where telescopes and microscopes show us the very far and the very small, the computer shows us the very much, all at once. It makes time available to the mind and eye. Computation, in that sense, is a kind of compacting of imagination: It helps us generate and explore a zillion scenarios and digest them into a representation that’s easy to play around with.
Forty years ago, one of the most common surgical procedures in North America was called an “exploratory laparotomy,” which was a fancy name for “Let’s open the patient’s abdomen and see what we can find.” Today, doctors don’t go exploring with a knife—they use things like CT scans, ultrasounds, and MRIs.
Modern medicine has in some sense been built on the back of better ways of seeing. The X-ray let us see bone through skin, which took the guesswork out of treating fractures and let us detect early tumors. CTs and ultrasounds let us see organs, blood vessels, muscles, and other soft tissues in three dimensions, which caused a revolution in diagnostic medicine and made surgery radically more precise and safe.
CT was made possible by the computer, which stitches together a collection of X-rays into a reconstructed 3-D image. But this is still more or less a static enterprise: a CT study is more like a picture than a movie. What if you could do for medicine what we’ve already done for chess and basketball—what if you could somehow use the computer to see not just what’s there, but what could be?
In some specialties, this is already becoming possible. Radiation oncologists, for instance, use accelerated beams of radioactive particles to destroy cancers. It used to be that these beams were targeted somewhat crudely: You’d take a two-dimensional X-ray of your patient and outline the area you wanted to zap (the tumor) and the areas you wanted to avoid (healthy organs). Since X-rays couldn’t show you much in the way of soft tissue, you had to use nearby bones as landmarks.
Today, radiation treatments are planned using software. The doctor identifies tumorous and healthy tissues in slice after slice of a CT scan by drawing on the slices directly, on the computer, as though coloring in a figure in MS Paint. This creates three-dimensional contour maps of the tumor and nearby organs. The software then takes these contours and runs hundreds of thousands of simulated treatments against them, using a model of how radioactive particles will behave in different tissue types—how they’ll be absorbed, how they’re likely to ricochet, and so on—to determine the ideal angle and power settings of the real beam.
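The planning loop itself—try many candidate plans against a physics model, keep the best—can be sketched in miniature. The two-point geometry and exponential falloff below are invented for illustration; real planning systems simulate particle transport through full CT volumes:

```python
import math

# Toy beam-angle search: score candidate beam angles by simulated dose
# to a tumor versus a nearby healthy organ, then keep the best angle.
# (A sketch of the "simulate many treatments" loop, not real dosimetry.)

TUMOR = (0.0, 0.0)   # beams are aimed through the tumor (the isocenter)
ORGAN = (1.0, 0.0)   # healthy organ sitting one unit to the tumor's right

def dose_at(point, angle):
    """Dose falls off with distance from the beam's central axis.
    The beam is a line through the tumor at `angle` radians."""
    dx, dy = math.cos(angle), math.sin(angle)   # beam direction
    px, py = point[0] - TUMOR[0], point[1] - TUMOR[1]
    dist = abs(px * dy - py * dx)               # perpendicular distance to beam
    return math.exp(-5.0 * dist)                # made-up falloff curve

def plan_score(angle, organ_penalty=2.0):
    """Reward dose to the tumor, heavily penalize dose to the organ."""
    return dose_at(TUMOR, angle) - organ_penalty * dose_at(ORGAN, angle)

# Sweep hundreds of candidate plans, as the planning software does.
candidates = [i * math.pi / 360 for i in range(360)]  # 0..pi, half-degree steps
best_angle = max(candidates, key=plan_score)
# Here the winner is the vertical beam (pi/2): it passes through the
# tumor while keeping the organ as far as possible from the beam line.
```

Threading the needle around the heart is this same search, run over three-dimensional contours instead of two points.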
The computer, in other words, gives the doctor the ability to see the projected path of different treatments as if playing out possible lines from a chess position. With that kind of vision, they can target radiation precisely. One oncologist told me that patients getting lymphomas zapped in their chests used to be at higher risk of heart attack: radiation would hit, and damage, a branch of the artery coming out of the heart. When software allowed us to visualize radiation treatments on top of CT scans, we learned how to thread the needle, avoiding damage to the heart while still destroying the cancer. “You just couldn’t see these things before,” the oncologist told me. (It also helped that chemotherapy improved, allowing for smaller radiation doses.)
This is going to start happening everywhere. We’ll use computers to explore possible futures, and over time we’ll learn how to see those futures for ourselves, almost to feel them, to the point where it’ll seem to those not in the know that we have command of an arcane force.