
In the 1950s, an 8-year-old boy suffered a head injury in a road accident. The back left part of his brain was damaged, specifically the primary visual cortex. As a consequence, he went blind in a large part of his right visual field. Years after the injury, his neurologists uncovered something strange. He could still see on the right side, even though he didn’t entirely know it.

The neurologists told him to face a screen and look at a small cross at the center to stabilize his eyes. A single dot appeared in his blind area, and he was told to point to the dot. In frustration, he insisted there was no dot. But then he took a wild guess, and pointed right at it. Try after try, as the dot was flashed in different locations in the blind area, he pointed accurately most of the time.

This bizarre phenomenon is called blindsight. It was discovered in the 1970s by British researchers Larry Weiskrantz, Nicholas Humphrey, and others. It’s caused by damage to the primary visual cortex, one part of a vast network of brain areas that process vision. Without that part, some aspects of vision are still possible but the conscious visual experience disappears.

Blindsight offers a tantalizing hint about human consciousness. It demonstrates the difference between merely processing visual information in the brain, as a computer does, and having a reportable conscious experience of it.

But this hint from blindsight proved hard to interpret. Does the primary visual cortex somehow generate awareness? If so, what exactly is being generated and how does it get from the back of the brain into our speech circuitry so that we can say that we have it? Maybe the primary visual cortex doesn’t create awareness itself, but instead sends visual information to a different system in the brain that is more closely related to consciousness. But if that’s so, what is this brain system that causes consciousness, and how does it work?

For a while, it seemed as though blindsight would remain only a tantalizing hint, but in 1999, Robert Kentridge, Charles Heywood, and Larry Weiskrantz stumbled on a new quirk of blindsight. It’s easy to mistake their discovery for a minor detail, but it turned out to be one of the most important insights into consciousness in decades.

Imagine you’re looking at a screen. A distracting dot flashes on the right side. A fraction of a second later, a number appears at exactly the same location. Your job is to report the number as quickly as possible. Your response time is probably pretty fast because the initial dot automatically drew your attention to that location. On the other hand, suppose the dot flashes on the left side of the screen. A fraction of a second later, the number appears on the right side. Now you’re probably slower to read the number. The dot automatically drew your attention to the wrong side and it takes a moment for your attention to readjust. This simple experiment allows researchers to measure how much attention was snagged by that initial dot.
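The measurement behind this paradigm can be sketched in a few lines of Python. The reaction times below are invented for illustration; in the real experiment they are recorded from participants.

```python
# Sketch of the spatial-cueing measurement described above.
# The "cueing effect" is how much slower people respond when the dot
# drew their attention to the wrong side of the screen.

def cueing_effect(valid_rts, invalid_rts):
    """Difference in mean reaction time (ms) between invalid trials
    (dot on the opposite side) and valid trials (dot at the same
    location as the number). A positive value means the dot snagged
    attention."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(invalid_rts) - mean(valid_rts)

# Hypothetical reaction times, in milliseconds.
valid = [310, 295, 320, 305]    # dot and number at the same location
invalid = [360, 350, 370, 355]  # dot flashed on the opposite side

print(cueing_effect(valid, invalid))  # prints 51.25
```

A larger value means the initial dot captured more attention, which is exactly the quantity the blindsight studies compare across aware and unaware trials.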

It turns out that in people with blindsight, the dot can snag attention even when it doesn’t snag conscious experience. Bizarrely, attention and awareness can be separated.

This finding was so startling that researchers were curious whether it might be true in anyone, not just people with brain damage. Imagine the same experiment I just described, but the initial dot is very dim and hidden in a distracting grid of colors and lines. Even if you don’t have clinical blindsight, you’ll swear you’ve seen no dot at all. And yet the dot can still snag your attention, sharpening your ability to process anything else that happens at the same location. You can attend to the dot even if you’re not aware of it.

For decades scientists used the terms “awareness” and “attention” more or less interchangeably, as though both referred to what happens when your mind takes hold of something. Blindsight has helped to pry the two concepts apart. We now know that we need a better theory of what they are and how they relate to each other.

One such theory is the Attention Schema Theory (AST), first proposed by my lab in 2011. In that theory, attention and awareness have a precise relationship to each other. Attention is a data-handling trick. It’s the brain’s way of focusing resources on some signals, boosting them and processing them at the expense of other signals. It’s a mechanistic process. Awareness is different. It’s more like the brain’s explicit knowledge about what it’s doing. The brain doesn’t have information about the microscopic details of attention, the neurons and the electrochemical signals, but it can give you a general account. It can say, “Yeah, I’ve got hold of that dot. I’m processing it. I have a kind of mental possession of it.” Awareness is the brain’s schematic description of attention.

In AST, attention is a constant process like a factory stamping out parts, and awareness is a constantly updated account of what the factory is doing, for quality control purposes. If you want to control something carefully, monitor it. If you doubt that, here’s a simple exercise and a favorite challenge for robotics experts. Balance a ruler vertically on your hand. It takes some practice, but you can get the hang of it. The skill depends on always watching the ruler. Your visual system registers what that stick is doing and computes what it’s likely to do next. Close your eyes, and you shut off that constant information. No matter how hard you try, the stick wobbles and falls.
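The ruler exercise is a feedback-control point, and it can be made concrete with a toy simulation. Everything here is invented for illustration (the gain, the noise level, the update rule): a drifting value stays near upright only while the controller can observe it, and wanders off the moment the observation is cut, like closing your eyes.

```python
import random

def balance(steps, monitored, gain=0.8, noise=0.3, seed=0):
    """Toy control loop. 'tilt' drifts randomly each step; a simple
    proportional controller pushes it back toward zero, but only when
    it can observe the current state. Returns the worst tilt seen."""
    rng = random.Random(seed)
    tilt, worst = 0.0, 0.0
    for _ in range(steps):
        tilt += rng.uniform(-noise, noise)  # random disturbance (a wobble)
        if monitored:
            tilt -= gain * tilt             # correct using the observed tilt
        worst = max(worst, abs(tilt))
    return worst

print(balance(1000, monitored=True))   # small: the stick stays near upright
print(balance(1000, monitored=False))  # large: it drifts like a random walk
```

With monitoring, each disturbance is immediately damped and the tilt stays bounded; without it, the disturbances accumulate. That is the relationship AST proposes between awareness and attention: the monitor that keeps the controlled process from wobbling.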

AST makes a simple prediction. Take away awareness, and attention should start to wobble. We designed an experiment to test the prediction. (The results were recently published in the Journal of Cognitive Neuroscience.) The logic of the experiment is simple. A person looks at a screen and every few seconds a dot is flashed. Sometimes the participant is aware of the dot, sometimes not. We used many ways to manipulate awareness. A simple way is to use a very dim dot. We ask the person, “Did you see a dot?” and sometimes they say yes, sometimes no. Another way is to flash a distracting pattern on the screen around the same time as the dot. Depending on the exact timing of the pattern (the difference can be as subtle as a hundredth of a second), the dot either pops out as obviously visible or disappears and becomes perceptually invisible.

Each time a dot was flashed, we measured how much of the person’s attention was drawn to it. The hypothesis was simple. If you’re aware of the dot, then it should snag your attention. That attention should spike right after the dot appears, stabilize on the dot for a short while, and fade after maybe half a second. If you’re unaware of the dot, it should still snag your attention, but then the attention paid to the dot should fluctuate, just like a faulty machine when the control model is missing.

The results confirmed the prediction. Without awareness of the dot, people didn’t pay any less attention to it. But attention jagged up and down. And the change happened immediately. Within five hundredths of a second after the dot appeared, awareness already made a difference. Long before any normal reaction time, before subjects could have made any high-level choices or decisions, awareness had begun to smooth out attention.

There may be many interpretations, and only future experiments can sort out all the possibilities. But so far, the findings suggest that awareness is the stabilizing control model for attention, which means we now have a workable theory of what subjective awareness is and what role it plays in the brain.

We can now turn to other questions. We can put people in an MRI scanner while they perform these tests, and check to see what specific networks in the brain light up when they pay attention without conscious experience. What networks light up when awareness is added? How do those networks connect to each other and what happens when one or the other is disrupted? Something in the brain computes the information that allows us to say, “I have a conscious experience,” and we may be closing in on what that system is and how it works.
