Humans have a tendency to see faces where there are none. So do computers. Are they more like us in their flaws?
This rocky hill in Ebihens, France, is, well, just that -- a rocky hill in Ebihens, France. But to pretty much any human observer, the assemblage of meaningless angles takes on a familiar appearance, that of a human face in profile. It has a distinct nose, eyes, lips, and chin, capped off with some foliage as hair. From the perspective pictured above, it's impossible not to see a man in a mountain.
This is an example of a phenomenon known as pareidolia, the human tendency to read significance into random or vague stimuli (both visual and auditory). The term comes from the Greek words "para" (παρά), meaning beside or beyond, and "eidolon" (εἴδωλον), meaning form or image. Though animals or plants can "appear" in clouds and human speech can do the same in static noise, the appearance of a face where there is none is perhaps the most common variant of pareidolia (this includes the subgenre of spotting Jesus or Mary in anything from toast to a crab).
Pareidolia was once thought of as a symptom of psychosis, but is now recognized as a normal, human tendency. Carl Sagan theorized that our hypersensitivity to faces stems from an evolutionary need to recognize them, often quickly. He wrote in his 1995 book, The Demon-Haunted World, "As soon as the infant can see, it recognizes faces, and we now know that this skill is hardwired in our brains. Those infants who a million years ago were unable to recognize a face smiled back less, were less likely to win the hearts of their parents, and less likely to prosper."
Humans are not alone in their quest to "see" human faces in the sea of visual cues that surrounds them. For decades, scientists have been training computers to do the same. And, like humans, computers display pareidolia.
Though there is something deeply human about the tendency to see faces in the non-human shapes around us, to anthropomorphize odd pieces of hardware or rocks on a hillside, that computers see humans where there are none should not be all too surprising. Facial recognition is a tough technological feat, and in the process, computers are bound to come up with false positives. Does this make the computers more like us? Have they taken on our most human cognitive errors? In a superficial sense, yes, computers do make errors that are similar to pareidolia, and this seems very human. But as you look into these false positives a bit more, you find a different story.
In an awesome little creative trick, New York University researcher Greg Borenstein applied the open-source software FaceTracker to a Flickr pool of examples called Hello Little Fella. In some instances, FaceTracker found a face just where you or I would:
Like a human, the computer has found a false positive. That humans and computers share some instances of pareidolia seems to underscore the human-like nature of those computers, brought about by their human-led training. In that sense, a computer's errors make the computer seem somehow more human.
But maybe the reason a computer "sees" a face in that key is very simple: Things around us do sometimes actually have the shapes that constitute a face. How can we say this is pareidolia, a strange phenomenon that is supposedly the byproduct of millions of years of evolution, and not just the basic truth that sometimes shapes do look like things they are not?
A project from Phil McCarthy called Pareidoloop pushes us to think about these questions. By combining random-polygon-generation software and facial-recognition software, McCarthy's program builds its own series of randomly generated faces. Out of layers upon layers of mish-mashed shapes, the software "recognizes" the faces, and then fine-tunes them into human likenesses. (McCarthy notes that a lot of them kind of resemble old pictures of Einstein.)
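The loop McCarthy describes can be sketched as simple hill climbing: draw a random shape onto the canvas, ask a face detector how face-like the result is, and keep the change only if the score improves. The toy Python below follows that shape, but nothing in it is Pareidoloop's actual code; in particular, `face_likeness` is a hypothetical stand-in that just rewards dark blobs where eyes and a mouth would sit, whereas the real project scores candidates with genuine facial-recognition software.

```python
import random

SIZE = 16  # toy canvas resolution; real runs would use a proper image


def blank_canvas():
    # Start from an all-white canvas (1.0 = white, 0.0 = black).
    return [[1.0] * SIZE for _ in range(SIZE)]


def add_random_polygon(canvas):
    """Return a copy of the canvas with one translucent grey rectangle
    blended in. (Pareidoloop draws polygons; a rectangle keeps this short.)"""
    new = [row[:] for row in canvas]
    x0, y0 = random.randrange(SIZE), random.randrange(SIZE)
    w, h = random.randint(1, SIZE // 2), random.randint(1, SIZE // 2)
    shade, alpha = random.random(), random.random()
    for y in range(y0, min(y0 + h, SIZE)):
        for x in range(x0, min(x0 + w, SIZE)):
            new[y][x] = (1 - alpha) * new[y][x] + alpha * shade
    return new


def face_likeness(canvas):
    """Hypothetical stand-in for a face detector's confidence score:
    reward darkness at three fixed spots (two eyes, one mouth).
    A real implementation would call facial-recognition software here."""
    targets = [(4, 5), (4, 10), (11, 8)]  # (row, col) of eyes and mouth
    return sum(1.0 - canvas[r][c] for r, c in targets)


def evolve(steps=500, seed=0):
    """Hill-climb: propose a random shape, keep it only if the score rises."""
    random.seed(seed)
    canvas = blank_canvas()
    score = face_likeness(canvas)
    for _ in range(steps):
        candidate = add_random_polygon(canvas)
        candidate_score = face_likeness(candidate)
        if candidate_score > score:  # accept only improvements
            canvas, score = candidate, candidate_score
    return canvas, score
```

Because rejected shapes are simply discarded, the image can only ever drift toward whatever the scorer calls a face, which is why the outputs of the real project converge on eerie, Einstein-ish portraits.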
The computer is "seeing" faces where there are just random shapes! But wouldn't anyone? The results are clearly faces, so much so that recognizing them as such can no more be labeled pareidolia than recognizing a face in a painting of a face can. Where is that line? If it's pareidolia to see a face in the two windows and door of a house, why not in a sketch of two eyes and a nose? Faces are, after all, just a series of well-arranged polygons. We'll see them in the world around us because sometimes, inevitably, shapes will be arranged in the formation of two eyes, a nose, and a mouth. How can we identify pareidolia in a way that is distinct from the "accurate" identification of an artistic representation of a face? How can we say pareidolia is a phenomenon of the human mind at all?
Borenstein's work with computers provides a way out of this, answering a most human question by looking at the idiosyncrasies of algorithms. He writes:
Facial recognition techniques give computers their own flavor of pareidolia. In addition to responding to actual human faces, facial recognition systems, just like the human vision system, sometimes produce false positives, latching onto some set of features in the image as matching their model of a face. Rather than the millions of years of evolution that shapes human vision, their pareidolia is based on the details of their algorithms and the vicissitudes of the training data they've been exposed to.
Their pareidolia is different from ours. Different things trigger it.
In Borenstein's sample, FaceTracker found faces in only seven percent of the images, meaning that even though the program did display this human tendency, it did so at a rate much lower than the human judges who created the Flickr pool. That said, we do not know how many false positives the program would spot in the world around us that humans didn't include in the pool. But we get a sense from the "mistakes" the program made, sometimes missing the obvious "face" and spotting another. Such mistakes are useful for seeing just how particularly human pareidolia is in the first place. Here's an example:
The computer's false positive is, as any human could tell you, wrong -- the wrong wrong answer, selecting B where a human would say A, when the answer is actually D, for none of the above. The mistakes of a computer are so other, so less-than-human, that we can see that pareidolia is not the recognition of just any old assemblage of eyes, nose, and a mouth, but specific ones, ones that must come from within the human observer, that are not inherently available in the shapes as they appear in the world.
And it shows us something more. Although a computer may, like a human, find false positives in the world around it, its sensibility for what makes a set of polygons a face is still, somehow, off. On its surface, a computer's tendency toward pareidolia, this very human phenomenon, seems human-like. In a strange echo of the tendency to see human faces in random shapes, we see our reflection in a machine's cognition -- a sort of pareidolia of the mind. We look at a computer's pareidolia and think, We make those very same mistakes!
But, in fact, we don't. The mistakes are different. A computer's flaws are still very machine -- and ours are very human.