Human memory is notoriously unreliable. Even people with the sharpest facial-recognition skills can only remember so much.
It’s tough to quantify how good a person is at remembering. No one really knows how many different faces someone can recall, for example, but various estimates tend to hover in the thousands—based on the number of acquaintances a person might have.
Machines aren’t limited this way. Give the right computer a massive database of faces, and it can process what it sees—then recognize a face it’s told to find—with remarkable speed and precision. This skill is what underpins the enormous promise of facial-recognition software in the 21st century. It’s also what makes contemporary surveillance systems so creepy.
The thing is, machines still have limitations when it comes to facial recognition, and scientists are only just beginning to understand what those constraints are. To figure out where computers struggle, researchers at the University of Washington created a massive database of faces—they call it MegaFace—and tested a variety of facial-recognition algorithms as the task scaled up in complexity. The idea was to test the machines against a database that included up to 1 million images of nearly 700,000 different people—rather than a large database featuring a relatively small number of different faces, which is more consistent with what’s been used in other research.
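The core of an evaluation like this is simple in outline: take a "probe" photo of a known person, search for its closest match in a gallery, and check whether the top hit is really the same person—then repeat as the gallery is padded with ever more unrelated "distractor" faces. The article doesn't describe MegaFace's actual pipeline, so the following is only a minimal sketch of that general idea, assuming faces have already been converted to fixed-length embedding vectors (the `rank1_accuracy` function, the toy identities, and the cosine-similarity matching are all illustrative choices, not MegaFace's method):

```python
import numpy as np

rng = np.random.default_rng(0)

def rank1_accuracy(probes, probe_ids, gallery, gallery_ids):
    """Fraction of probe embeddings whose single nearest gallery
    embedding (by cosine similarity) shares the probe's identity."""
    p = probes / np.linalg.norm(probes, axis=1, keepdims=True)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    best = (p @ g.T).argmax(axis=1)   # index of top match per probe
    return float((gallery_ids[best] == probe_ids).mean())

# Toy setup: 5 known identities, each a distinct point in a 64-D
# embedding space; probe and gallery shots are noisy copies of it.
centers = rng.normal(size=(5, 64))
probe_ids = np.arange(5)
probes = centers + 0.05 * rng.normal(size=centers.shape)
matches = centers + 0.05 * rng.normal(size=centers.shape)

# Pad the gallery with 1,000 unrelated distractor faces (id = -1),
# mimicking how scaling up the database makes the search harder.
distractors = rng.normal(size=(1000, 64))
gallery = np.vstack([matches, distractors])
gallery_ids = np.concatenate([probe_ids, -np.ones(1000, dtype=int)])

print(rank1_accuracy(probes, probe_ids, gallery, gallery_ids))
```

In this toy case the identities are well separated, so accuracy stays high; the interesting behavior the researchers were after is how that number falls as the distractor count climbs toward a million.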