According to medieval medicine, laziness is caused by a build-up of phlegm in the body. The reason? Phlegm is a viscous substance. Its oozing motion is analogous to a sluggish disposition.
The phlegm theory has more problems than just a few factual errors. After all, suppose you had a beaker of phlegm and injected it into a person. What exactly is the mechanism that leads to a lazy personality? The proposal resonates seductively with our intuitions and biases, but it doesn’t explain anything.
In the modern age we can chuckle over medieval naiveté, but we often suffer from similar conceptual confusions. We have our share of phlegm theories, which flatter our intuitions while explaining nothing. They’re compelling, they often convince, but at a deeper level they’re empty.
One corner of science where phlegm theories proliferate is the cognitive neuroscience of consciousness. The brain is a machine that processes information, yet somehow we also have a conscious experience of at least some of that information. How is that possible? What is subjective experience? It’s one of the most important questions in science, possibly the most important, the deepest way of asking: What are we? Yet many of the current proposals, even some that are deep and subtle, are phlegm theories.
The oscillation theory of consciousness became popular in neuroscience in the 1990s and still has its adherents. Here’s a quick summary. The brain is composed of neurons, brain cells that process and transmit information in the form of electrochemical signals. If you measure the activity of a single neuron, in some cases that activity follows an oscillating pattern, rising and falling in a regular rhythm. Maybe consciousness is caused by those neuronal oscillations. Were there no oscillations, the theory goes, information would be processed without any accompanying subjective awareness. When information is carried by oscillating neuronal activity, a subjective experience arises. For example, when color information is handled by oscillating neuronal activity, that process gives rise to an inner experience of color.
Neuronal oscillations probably do play an important role in the flow of information in the brain, although the exact role is debated. And yet as an explanation of consciousness, it’s a phlegm theory. It appeals to intuition and explains nothing.
Most people have a set of intuitions about consciousness. The physical brain somehow plays host to consciousness, but we suspect that consciousness itself isn’t a physical substance. It’s more ethereal, like a kind of energy. In a similar way, a tuning fork is a physical thing, and its vibrations are the energy associated with it. The vibrations aren’t themselves a physical substance. If you don’t think too hard about it, the vibrations relate to the tuning fork like consciousness relates to the brain. That analogy is even built into our language. We talk about our minds being tuned to this or that vibe. The idea that consciousness is an oscillation in the brain flatters our intuitions. It feels right.
But the theory provides no mechanism that connects neuronal oscillations in the brain to a person being able to say, “Hey, I have a conscious experience!” You couldn’t give the theory to an engineer and have her understand, even in the foggiest way, how one thing leads to the other. There is not even an attempt at an explanation. It’s a phlegm theory. It appeals to intuition at a superficial level while providing no scientific explanation for the phenomenon.
Another popular explanation of consciousness is the integrated information theory. Actually, there are several different theories that fit into this same general category. They share the underlying idea that consciousness is caused by linking together large amounts of information. It’s one thing to process a few disconnected scraps of information. But when information is connected into vast brain-spanning webs, then, according to the proposal, subjective consciousness emerges.
I can’t deny that information is integrated in the brain on a massive scale. Vast networks of information play a role in many brain functions. If you could de-integrate the information in the brain, a lot of basic functions would fail, probably including consciousness. And yet, as a specific explanation of consciousness, this one is definitely a phlegm theory.
Again, it flatters intuition. Most people have an intuition about consciousness as an integrated whole. Your various impressions and thoughts are somehow rolled together into a single inner you. That’s the impression we get, anyway.
You see this same trope in science fiction: If you bundle enough information into a computer, creating a big enough connected mass of data, it’ll wake up and start to act conscious, like Skynet. This appeal to our latent biases has given the integrated information theory tremendous currency. It’s compelling to many respected figures in the field of neuroscience, and is one of the most popular current theories.
And yet it doesn’t actually explain anything. What exactly is the mechanism that leads from integrated information in the brain to a person who ups and claims, “Hey, I have a conscious experience of all that integrated information!”? There isn’t one.
If you point a wavelength detector at the sky, it will compute that the sky is blue. If you build a machine that integrates the blueness of the sky with a lot of other information – the fact that the blue stuff is a sky, that it’s above the earth, that it extends so far here and so far there – if the machine integrates a massive amount of information about that sky – what makes the machine claim that it has a subjective awareness of blue? Why doesn’t it just have a bunch of integrated information, without the subjective awareness? The integration theory doesn’t even try to explain. It flatters our intuitions while explaining nothing.
Some scholars retreat to the position that consciousness must be a primary property of information that cannot be explained. If information is present, so is a primordial, conscious experience of it. The more information that is integrated together, the richer the conscious experience. This type of thinking leads straight to a mystical theory called panpsychism, the claim that everything in the universe is conscious, each in its own way, since everything contains at least some information. Rocks, trees, rivers, stars. This theory is the ultimate in phlegm theories. It has enormous intuitive appeal to people who are prone to project consciousness onto the objects around them, but it explains absolutely nothing. One must simply accept consciousness as an elemental property and abandon all hope of understanding it.
When I talk to other scientists about the study of consciousness, very often the first thing I’m asked to explain is why the topic is worth scientific attention. I argue that it’s not just a topic for philosophers or poets, and it’s not just a matter of opinion or belief. We can actually build rational theories of consciousness, theories that have explanatory power and that can be tested experimentally. And it’s crucial knowledge. Consciousness has a specific, practical impact on brain function. If you want to understand how the brain works, you need to understand that part of the machine. No neuroscientist, and no expert in artificial intelligence, should scoff at consciousness.
Here’s how we can construct theories that do a better job of explaining, even if they appeal less to our biases and intuitions. The brain is an information-processing machine. It takes in data, transforms it, and uses it to help guide behavior. When that machine ups and says, “Hey, I have a conscious experience of myself and the things around me,” that assertion is based on data computed in the brain. As scientists we can ask a series of basic questions. How did the machine arrive at that self-description? What’s the specific, adaptive use of that self-description? What networks in the brain compute that type of information? These are all scientifically approachable questions. And we are beginning to see specific, testable theories that can answer them. The theories that show the most promise are sometimes called metacognitive theories. They are theories of how the brain computes information about itself and its own processes.
The brain constructs packets of information, virtual models, that describe things in the world. Anything useful to monitor and predict, the brain can construct a model of. These simulations change continuously as new information comes in, and they’re used to guide ongoing behavior. For example, the visual system constructs rich, detailed models of the objects in the visual world—a desk, a car, another person. But the brain doesn’t just model concrete objects in the external world. It also models its own internal processes. It constructs simulations of its own cognition.
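The idea of a continuously updated internal model can be sketched in code. The following toy Python class (all names hypothetical, and vastly simpler than anything the brain does) tracks some external quantity with a running estimate: each new observation is blended into the model rather than stored raw, so what the system holds is always a simplification of the world, not the world itself.

```python
class WorldModel:
    """A toy internal model: a running estimate of some external quantity."""

    def __init__(self):
        self.estimate = None

    def update(self, observation, learning_rate=0.3):
        # Blend new data into the model rather than storing the raw input,
        # so the model is always a compressed sketch of the world.
        if self.estimate is None:
            self.estimate = observation
        else:
            self.estimate += learning_rate * (observation - self.estimate)

    def predict(self):
        # Behavior is guided by the model, not by the raw observations.
        return self.estimate


model = WorldModel()
for obs in [10.0, 12.0, 11.0, 13.0]:
    model.update(obs)
print(model.predict())  # a smoothed, simplified stand-in for the raw data
```

The point of the sketch is only that the model is useful precisely because it discards detail; the same logic applies when the thing being modeled is the brain’s own processing.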
And those simulations are never fully accurate. They contain incomplete, sometimes surreal information. The brain constructs a distorted, cartoon sketch of itself and its world. And this is why we’re so certain that we have a kind of magic feeling inside us.
This type of theory can explain some things and not others. It does not explain how the brain generates consciousness. It explains why we claim to have consciousness and why we’re so certain of that claim. It gives a general outline for a machine that processes information and, in the act of doing so, concludes that it has a subjective experience of that information. The machine has no way of realizing that this self-description is, well, not totally wrong, but distorted. What it has is a deep processing of information. What it concludes it has is something else—conscious experience.
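To make that distinction concrete, here is a deliberately crude Python sketch (all names hypothetical) of a machine whose self-model leaves out the mechanistic details of its own processing. When queried about itself, it can answer only from the simplified self-model, so its report sounds like a claim of subjective experience rather than a description of information processing.

```python
class Agent:
    """A toy metacognitive machine: detailed processing plus a cartoonish self-model."""

    def __init__(self):
        self.percepts = {}     # the real machinery: deep, detailed processing
        self.self_model = {}   # a shallow caricature of that processing

    def perceive(self, feature, value):
        # The actual processing records mechanistic detail.
        self.percepts[feature] = {"value": value, "mechanism": "wavelength analysis"}
        # The self-model summarizes it with the mechanism left out.
        self.self_model[feature] = f"an experience of {value}"

    def introspect(self, feature):
        # Introspection reads only the self-model, never the percepts,
        # so the agent cannot report its own mechanisms.
        return f"I have {self.self_model[feature]}"


agent = Agent()
agent.perceive("color", "blue")
print(agent.introspect("color"))  # prints: I have an experience of blue
```

The sketch explains nothing about how experience arises; like the theory it illustrates, it only shows how a machine could come to assert, with full confidence, that it has one.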
The approach definitely doesn’t resonate with our common intuitions and biases. In this type of theory, consciousness isn’t magical. It isn’t mysterious. It isn’t a vibration. It doesn’t emerge like an energy. It’s not even very hard to understand. It’s a surreal, cartoonish description, a self-portrait. The theory has none of the intrinsic appeal of a good, crowd-pleasing phlegm theory. But a theory doesn’t have to be emotionally satisfying to be true.
The explanation is sound enough that, in principle, one could build the machine. Give it fifty years, and I think we’ll get there. Computer scientists already know how to construct a computing device that takes in information, that constructs models or simulations, and that draws on those simulations to arrive at conclusions and guide behavior. Every component is buildable at least in principle, even if the details are beyond current knowledge. With a phlegm theory, you can’t build artificial consciousness, any more than you can make people lazy by injecting phlegm into them.