One of more than 3,000 brains in a collection at the psychiatric hospital in Duffel, Belgium. (Yves Herman / Reuters)

Not many people get to contemplate their brain in a jar, but if all goes to plan then I’ll be in that curious position by Christmas.

Happily, I’ll still have the brain I’m using right now, which is how I’ll be able to do the contemplating. The other one will be my second brain. About the size of a frozen pea, it will have been grown from a small lump of flesh that researchers at the Institute of Neurology of University College London recently dug from my arm.

My skin cells will be transformed into a state akin to stem cells, which can grow into any type of tissue, using Nobel Prize-winning methods devised in the mid-2000s. These so-called induced pluripotent stem cells, or iPSCs, will then be gently coaxed into becoming neurons. Following much the same program as neurons in a fetus, the cultured cells will organize themselves into brain-like structures, taking on the identities of some of the brain’s different varieties of neurons and even starting to form hints of the familiar folds and convolutions.

The neurons will begin to send one another signals. We can’t properly call this thinking, but it constitutes the ingredients of thinking. My mini-brain won’t grow beyond the size of that pea, however, because it will lack a blood supply: Above a certain size, the inner neurons would be deprived of oxygen and die.

The UCL folks are growing such mini-brains to study neurodegenerative diseases. By making these so-called organoids from the iPSCs of people with genetic predispositions to dementia-causing conditions such as Alzheimer’s, they can investigate how those genes create problems, and perhaps eventually find treatments. My mini-brain will be used as an anonymized healthy control sample for the research.

The author’s cells in culture, on their way to becoming induced pluripotent stem cells. (Chris Lovejoy and Charlie Arber / UCL Institute of Neurology)

I have no idea yet how I will respond to my own “brain in a jar.” But it has set me thinking about how pervasive this cultural trope is, and how much is invested in it. There is something disturbingly intimate about seeing, perhaps even touching, the brain of another person, and it’s not surprising that the image features in tales of transgression both real and fictional. A heart preserved in formalin is often seen as mere inert offal, but we seem to suspect that within the soft clefts of the human brain the person themselves somehow resides—or at least clues to what made them who they were.

So the brain in a jar has become a potentially misleading avatar of self. Its gray folded surface represents an illusory boundary between everything we know and everything outside of that knowledge.

* * *

To find the person, then, we go delving into the brain. Albert Einstein’s brain, removed by pathologist Thomas Stoltz Harvey after the great physicist’s death in 1955, was cut into slices and preserved. Harvey himself kept some of those fragments almost obsessively; others have now found their way into museums, where they have become macabre emblems of genius.

Rumors abound about why Einstein’s brain was “special,” but the truth is that everyone’s brain is likely to show some deviations from the norm. And while some behaviors can be linked to physical features of different brain regions, the structure of the brain itself responds to experience: Our brains don’t just make us who we are; who we are and what we do also reshapes our brains. For example, UCL neurologists have found that the rear of a London taxi driver’s hippocampus, a brain region associated with memory and navigation, enlarges during training.

Still, the notion of brain as destiny persists. Think of Dr. Frankenstein’s crazed assistant Fritz giving him the “abnormal” brain of a criminal for his monster in James Whale’s 1931 movie, dooming the creature to be homicidal. (Mel Brooks’s Young Frankenstein spoofs that scene when Marty Feldman, the boggle-eyed assistant to Gene Wilder’s crazed doctor, tells his master that the brain belonged to “Abby Normal.”)

There are deviations from the respectable tradition of preserving brains for dissection that are far more grotesque than anything you find in Gothic horror novels. In the 1970s, it became clear that the shelves of little brains in jars kept for decades in the basement of the Otto Wagner Hospital in Vienna had been taken from children. The children were held in a “special children’s ward” and murdered as “mental defectives” on the orders of the Nazi doctor Heinrich Gross, who apparently intended to study the anatomical causes of such “defects.”

Today, some people want their brain to end up in a jar by choice—not for the benefit of medical research, but because they figure they might need it again. Brain freezing is big business: Many hundreds of people have paid up to $200,000 or so for their bodies—or, for less than half that cost, just their heads—to be cryogenically preserved after death. The hope is that science will one day enable the brain to be revived and the person, in effect, to be brought back to life—and perhaps then to live forever. (You won’t necessarily want your original body back, especially if an accident or illness is what killed you.)

Currently there seems to be no actual prospect that a frozen brain could be revived. Experts point out that today’s cryogenic techniques inevitably cause damage to tissues, and that thawing would induce still more. But brain-freezing immortalists contend that the technology offers a glimmer of hope that death can one day be cheated. “If you can bridge the gap (it’s only a few decades), then you’ve got it made,” writes the computer scientist Ralph Merkle. “All you have to do is freeze your system state if a crash occurs and wait for the crash technology to be developed ... You can be suspended until you can be uploaded.”

A crash? Uploaded? You can see where this is going: The idea is that the brain is just a kind of computer, full of data that can be stored on a hard drive in a file labeled “You.”

As Merkle sees it, your brain is material, governed by the laws of physics; those laws can be simulated on a computer; therefore your brain can be, too. Although the network of neural connections in the brain is astronomically complex, we can put an upper limit on how many bits would be needed to encode it. Uploading the contents of a brain would require about 10¹⁸ bits of computer memory and around 10¹⁶ logic operations per second, Merkle calculates. That’s perfectly imaginable at the current rate of technological advance.
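To see where numbers of that scale might come from, here is a minimal back-of-envelope sketch in Python. The input figures below (neuron count, synapses per neuron, bits per synapse, update rate) are round-number assumptions chosen for illustration; they are not Merkle’s own inputs, and nudging any one of them by a factor of 10 shifts the answer by an order of magnitude.

```python
# Back-of-envelope estimate of the memory and compute needed to represent
# a brain at the level of its synaptic connections.
# All figures are illustrative assumptions, not Merkle's own derivation.

NEURONS = 1e11              # roughly 100 billion neurons (common textbook figure)
SYNAPSES_PER_NEURON = 1e4   # assumed ~10,000 connections per neuron
BITS_PER_SYNAPSE = 1e3      # assumed generous allowance for each connection's state
UPDATES_PER_SECOND = 10     # assumed average update rate per synapse

synapses = NEURONS * SYNAPSES_PER_NEURON        # ~1e15 connections
memory_bits = synapses * BITS_PER_SYNAPSE       # ~1e18 bits
ops_per_second = synapses * UPDATES_PER_SECOND  # ~1e16 operations per second

print(f"Synapses:       {synapses:.0e}")
print(f"Memory needed:  {memory_bits:.0e} bits")
print(f"Compute needed: {ops_per_second:.0e} operations per second")
```

The point of such a sketch is scale rather than precision: even with generous allowances, the totals land within sight of foreseeable hardware, which is what gives the uploading argument its rhetorical force.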

According to this “transhumanist” vision, we will soon be able to live on inside computer hardware. The brain in a jar becomes the brain on a chip.

* * *

Such heady visions of brain downloads ignore the fact that the brain is not the hardware of the person but an organ of the body. Several experts in both AI and cognitive science argue that embodiment is central to experience and brain function. At the immediate physiological level, the brain doesn’t just control the rest of the body; it engages in a many-channeled dialogue with it, through sensory experience and, for example, hormones in the bloodstream.

And embodiment is central to thought itself, according to the AI guru Murray Shanahan, who acted as a consultant on Alex Garland’s 2014 AI movie Ex Machina. Shanahan, a professor of cognitive robotics at Imperial College London, writes that cognition is largely about imagining the consequences of physical actions we might take in the world—a process of “inner rehearsal” of future scenarios.

In this view, then, the “brain in a jar” is not a feasible avatar of the entire human. One could argue that the brain-on-a-chip could be coupled to a robotic body that allows physical interaction with the surroundings, or even to just a simulation of a virtual environment. But Shanahan’s perspective raises questions about whether there is any purely mental “essence of you” that can be bottled in the first place.

The embodied aspect of the brain has long exercised philosophers, who debate whether what they call a “brain in a vat” alone can develop any reliable notion of truth about the world. The question stems from a hypothetical scenario: How do you know you’re not just a brain in a vat being presented with a simulated world? How, then, can you know that all your beliefs about the world are not false?

The question has entered popular culture via the Matrix movies, now almost an obligatory port of call for discussions of the philosophy of mind. But the predicament was grist for the philosophical mill long before the Wachowskis picked it up. The most celebrated critic of “brain in a vat” skepticism was the late American philosopher Hilary Putnam, who argued in 1981 that the whole notion is self-refuting. Words acquire their meaning, Putnam argued, through causal interaction with the things they name; a brain in a vat has never interacted with real objects outside its simulated experience, so its words and concepts cannot refer to them. Even if there are actual trees in the world that contains the vat, the brain’s concept “tree” cannot be said to refer to them.

The same is true for the words “brain” and “vat,” which to a brain in a vat can’t refer to actual brains and vats. The philosopher Anthony Brueckner expresses Putnam’s argument in a seemingly Zen-like turn of phrase: “If I am a Brain in a Vat, then I am not a Brain in a Vat.”

It’s hardly surprising that not everyone is persuaded by Putnam’s subtle argument against our right to be skeptical. The philosopher Thomas Nagel adds to the impression that philosophers are here attempting to escape, Houdini-like, from the sealed glass jar of their own minds. So what if I can’t express my skepticism by saying “Perhaps I am a brain in a vat,” and must instead say “Perhaps I can’t even think the truth about what I am, because I lack the necessary concepts and my circumstances make it impossible for me to acquire them”? That’s still pretty skeptical, Nagel says.

No wonder Neo just decided to shoot his way out of the problem.

* * *

The “brain in a vat” might sound like one of those reductio ad absurdum scenarios for which philosophers enjoy notoriety, but some think it is already a reality. The anthropologist Hélène Mialet used precisely that expression to describe the British physicist Stephen Hawking on his 71st birthday, in 2013. Hawking, famously confined to a wheelchair for decades by amyotrophic lateral sclerosis, or ALS, can now voluntarily move almost no muscles apart from those in his cheek, whose slight movements are linked to a computer system that allows him to communicate and interact with the world. Mialet argued that this essentially makes him a brain hooked up to machinery: He has become “more machine now than man,” like Darth Vader.

The description, intended only to highlight our own increasing dependency on machine interfaces, drew intense criticism and condemnation. But perhaps Mialet was merely articulating in a direct and confrontational fashion how many people have long viewed Hawking: as a brilliant brain trapped in a non-functioning body. His remarkable endurance in the face of a condition unimaginable to most of us fits comfortably—or uncomfortably—with our predisposition to stuff all our notions of humanity into the single organ that orchestrates our existence in the world.

It may be that my own little “brain in a jar” will challenge me about that. Just suppose we could give it a blood supply and let it keep growing to “full size.” What then would it experience? It’s an artificially ghoulish idea, but one that would worry me in the way that a full-grown “liver in a vat” would not. I would, I think, be forced to suspect that there was “someone” in there—and deep down, perhaps I’d suspect it was me.
