“Everything a computer ‘sees’ is based on what it ‘knows’... depending on what you mean by ‘sees,’” Emily Pittore, a software engineer at iRobot, wrote to me in an email. “I use scare quotes because I hesitate to apply the language of human cognition to computers too liberally.”
“If you mean ‘sees’ as ‘optical input,’ then computers always see the same thing,” she said. In other words, given identical optical input, a machine registers exactly the same thing every time, while “humans have a much more complicated sensor—eyeballs and a brain.”
Visual processing in humans is also heavily influenced by what they already know, but what people actually perceive can vary dramatically, even when the input is identical. That’s according to a study by researchers at Johns Hopkins University, published last month in the Journal of Experimental Psychology: Human Perception and Performance. The researchers ran a series of experiments to determine how far prior knowledge of the Arabic alphabet shapes the way different people perceive Arabic letters.
They found that the same letters look different to different people, depending on whether they can read Arabic. And though they focused on letters, the researchers said their findings would apply to anything: objects, photographs, illustrations, and so on. The overarching takeaway was this: What you already know profoundly affects how you see. Which sounds intuitive, right? But these findings are more nuanced than they may seem.
“We’re not just saying, ‘Oh, you’re an expert, so you see things differently,’” said Robert Wiley, a graduate student in cognitive science at Johns Hopkins and the study’s lead author. “The subtle point is that it goes beyond your explicit knowledge to actually change your visual system. These are things we don’t have conscious access to.”
Which is why humans can’t really unlearn things neatly: we don’t know how to untangle what we see from how we see it in the first place. You might forget a fact or lose a skill you once had, but there’s no way to map, and therefore no way to deliberately refine, the ways in which exposure to certain inputs has altered your perceptions.
Machines, however, can unlearn.
In fact, some computer scientists say it’s increasingly important that machines be designed to do just that. Part of the promise of machine-learning systems is that computers will be able to process tremendous data streams, for tasks like facial recognition. Entire industries are transforming as a result of this computing power. With the proliferation of sensitive data flowing through vast networks, humans need to be able to tell computers when, and precisely how, to forget huge swaths of what’s called data lineage: the complex web of information, computations, and derived data that propagates through brain-like computer networks.
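To make that concrete: researchers have proposed keeping a model in “summation form,” so that each training example only ever adds to a set of running totals; forgetting the example then means subtracting exactly what it added, without retraining on everything else. The Python sketch below illustrates the idea with a toy spam filter. It is a minimal sketch, not any particular system’s implementation, and the class and method names are invented for this example.

```python
from collections import Counter

class CountBasedSpamFilter:
    """Toy spam filter whose entire 'memory' is a set of running counts.

    Because each training email only ever adds to these sums, the model
    can forget an email exactly by subtracting the same counts back out,
    with no need to retrain on the remaining data. (Scoring is omitted;
    this hypothetical sketch is only about learning and unlearning.)
    """

    def __init__(self):
        self.word_counts = {True: Counter(), False: Counter()}  # label -> word tallies
        self.email_counts = Counter()                           # label -> emails seen

    def learn(self, words, is_spam):
        self.email_counts[is_spam] += 1
        self.word_counts[is_spam].update(words)

    def unlearn(self, words, is_spam):
        # Exact removal: subtract this email's contribution, then prune
        # zero counts so the state matches a model that never saw it.
        self.email_counts[is_spam] -= 1
        self.word_counts[is_spam].subtract(words)
        self.word_counts[is_spam] = +self.word_counts[is_spam]
        self.email_counts = +self.email_counts

# A model that learns a spam email and then unlearns it ends up
# indistinguishable from one that never saw that email at all.
trained, fresh = CountBasedSpamFilter(), CountBasedSpamFilter()
for model in (trained, fresh):
    model.learn(["meeting", "agenda"], is_spam=False)
trained.learn(["free", "winner", "cash"], is_spam=True)
trained.unlearn(["free", "winner", "cash"], is_spam=True)
assert trained.word_counts == fresh.word_counts
assert trained.email_counts == fresh.email_counts
```

What makes this work is that the model’s state is nothing but sums of per-example contributions, so there is a clean ledger to subtract from. A brain-like network, whose internal weights blend every example together, offers no such ledger, which is why telling it to forget is a far harder problem.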