Updated at 10:17 a.m. ET on October 2, 2019.
On July 22, 2009, the neuroscientist Henry Markram walked onstage at the TEDGlobal conference in Oxford, England, and told the audience that he was going to simulate the human brain, in all its staggering complexity, in a computer. His goals were lofty: “It’s perhaps to understand perception, to understand reality, and perhaps to even also understand physical reality.” His timeline was ambitious: “We can do it within 10 years, and if we do succeed, we will send to TED, in 10 years, a hologram to talk to you.” If the galaxy-brain meme had existed then, it would have been a great time to invoke it.
It’s been exactly 10 years. He did not succeed.
One could argue that the nature of pioneers is to reach far and talk big, and that it’s churlish to single out any one failed prediction when science is so full of them. (Science writers joke that breakthrough medicines and technologies always seem five to 10 years away, on a rolling window.) But Markram’s claims are worth revisiting for two reasons. First, the stakes were huge: In 2013, the European Commission awarded his initiative—the Human Brain Project (HBP)—a staggering 1 billion euro grant (worth about $1.42 billion at the time). Second, the HBP’s efforts, and the intense backlash to them, exposed important divides in how neuroscientists think about the brain and how it should be studied.
Markram’s goal wasn’t to create a simplified version of the brain, but a gloriously complex facsimile, down to the constituent neurons, the electrical activity coursing along them, and even the genes turning on and off within them. From the outset, this approach drew widespread criticism: to many other neuroscientists, its bottom-up strategy seemed implausible to the point of absurdity. The brain’s intricacies—how neurons connect and cooperate, how memories form, how decisions are made—are more unknown than known, and couldn’t possibly be deciphered in enough detail within a mere decade. It is hard enough to map and model the 302 neurons of the roundworm C. elegans, let alone the 86 billion neurons within our skulls. “People thought it was unrealistic and not even reasonable as a goal,” says the neuroscientist Grace Lindsay, who is writing a book about modeling the brain.
And what was the point? The HBP wasn’t trying to address any particular research question, or test a specific hypothesis about how the brain works. The simulation seemed like an end in itself—an overengineered answer to a nonexistent question, a tool in search of a use. When the Blue Brain Project, a related venture that Markram founded, released a simulation of 30,000 rat neurons in 2015—a mere 0.15 percent of the rodent’s tiny brain—critics billed it as a lot of work that revealed nothing new. Even if it could scale up to human size in time, why should it? “Now you’d have a brain in a computer, and before you had a brain in a skull,” Lindsay says. “What does that tell you?”
Markram explained that, contra his TED Talk, he had never intended for the simulation to do much of anything. He wasn’t out to make an artificial intelligence, or pass a Turing test. Instead, he pitched it as an experimental test bed—a way for scientists to test their hypotheses without having to prod an animal’s head. “That would be incredibly valuable,” Lindsay says, but it’s based on circular logic. A simulation might well allow researchers to test ideas about the brain, but those ideas would already have to be very advanced to pull off the simulation in the first place. “Once neuroscience is ‘finished,’ we should be able to do it, but to have it as an intermediate step along the way seems difficult.”
“It’s not obvious to me what the very large-scale nature of the simulation would accomplish,” adds Anne Churchland from Cold Spring Harbor Laboratory. Her team, for example, simulates networks of neurons to study how brains combine visual and auditory information. “I could implement that with hundreds of thousands of neurons, and it’s not clear what it would buy me if I had 70 billion.”
In a recent paper titled “The Scientific Case for Brain Simulations,” several HBP scientists argued that big simulations “will likely be indispensable for bridging the scales between the neuron and system levels in the brain.” In other words: Scientists can look at the nuts and bolts of how neurons work, and they can study the behavior of entire organisms, but they need simulations to show how the former create the latter. The paper’s authors drew a comparison to weather forecasts, in which an understanding of physics and chemistry at the scale of neighborhoods allows us to accurately predict temperature, rainfall, and wind across the whole globe.
The analogy doesn’t work, says Adrienne Fairhall, a neuroscientist from the University of Washington who has a background in physics. Yes, large-scale simulations are useful for understanding weather and galaxies, but “planetary systems are not about anything other than themselves,” she says. “A brain is built to be about other things.” That is: It takes in information about the world, and it moves human and animal bodies, which then influence that world. How much would we really learn from a disembodied brain in a virtual jar, which isn’t connected to eyes, ears, or limbs? “You could take a chunk of tissue and do all the physics, but it wouldn’t get at what it’s all for,” Fairhall says. “Biology is matter that has meaning. Simulating the tissue is doable, but meaningless.”
The HBP, then, is in a very odd position, criticized for being simultaneously too grandiose and too narrow. None of the skeptics I spoke with was dismissing the idea of simulating parts of the brain, but all of them felt that such efforts should be driven by actual research questions. For example, Xiao-Jing Wang from New York University has built models that show how neurons, if connected in a certain way, can hold on to electrical activity even when they’re no longer being stimulated—the essence of working memory, the ability to keep thoughts in mind. Meanwhile, Chris Eliasmith from the University of Waterloo has built a model called Spaun, which uses a simplified set of 2.5 million virtual neurons to do simple arithmetic and solve basic reasoning problems.
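The intuition behind such attractor models of working memory can be sketched in a few lines of code. This is a deliberately toy illustration, not Wang’s actual model or Spaun: a single self-connected population of neurons whose recurrent excitation (the assumed strength `w_rec` is made up for the example) lets activity persist after a brief stimulus has switched off.

```python
from math import tanh

def simulate(w_rec=1.1, steps=200, pulse=(20, 30)):
    """Firing rate of one self-exciting population over time.

    w_rec : strength of recurrent excitation (illustrative value)
    pulse : time steps during which an external stimulus is on
    """
    rate = 0.0
    history = []
    for t in range(steps):
        stimulus = 1.0 if pulse[0] <= t < pulse[1] else 0.0
        # Recurrent input feeds the population's own activity back to it;
        # the saturating tanh nonlinearity keeps the rate bounded.
        drive = w_rec * rate + stimulus
        rate = tanh(max(drive, 0.0))
        history.append(rate)
    return history

# With strong recurrence the rate settles at a nonzero level long after
# the stimulus ends; with weak recurrence it decays back toward zero.
persistent = simulate(w_rec=1.1)
forgetful = simulate(w_rec=0.8)
```

The point of the sketch is the one Wang’s far more detailed models make rigorously: whether a network “remembers” a transient input depends on how its neurons are wired together, which is exactly the kind of targeted question critics wanted simulations to answer.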
Countless such projects could have been funded with the money channeled into the HBP, which explains much of the furor around the project. In 2014, almost 800 neuroscientists wrote an open letter to the European Commission saying that “the HBP is not a well conceived or implemented project and that it is ill suited to be the centerpiece of European neuroscience.” A year later, a mediation committee agreed with the critics, asking the HBP to refocus its efforts “on a smaller number of properly prioritized activities” and to retool its unorthodox governance structure.
The HBP acquiesced. It effectively rebranded itself as a software project that curates existing data about the brain, provides tools for searching those data, and develops simulators that will allow others to build their own models. And with the big bolus of funding set to expire in 2023, the team’s recent paper reads like a plea for more investment. “The development of high-quality brain simulators requires a long-term commitment of resources,” they wrote.
In a statement, Katrin Amunts, the scientific director of the Human Brain Project, says that, from the outset, the project drew from “a range of different neuroscience fields” to create a computational research infrastructure. “This research infrastructure for neuroscience in Europe—an idea which was proposed at the outset of the project—remains our answer to the challenge of how to decode the human brain,” she says, adding that the project is “creating something available nowhere else in the world, a single, integrated, platform for large-scale collaborative neuroscience.” That new infrastructure will be launched in the fall, under the name EBRAINS.
Maybe it’s telling, though, that the people I contacted struggled to name a major contribution that the HBP has made in the past decade. That’s not to say that such contributions don’t exist. It’s more that they don’t seem to have made a splash proportional to the project’s budget, or perhaps that the HBP still has to earn back the trust of a community it alienated through hype.
Markram seems undeterred. In a recent paper, he and his colleague Xue Fan firmly situated brain simulations within not just neuroscience as a field, but the entire arc of Western philosophy and human civilization. And in an email statement, he told me, “Political resistance (non-scientific) to the project has indeed slowed us down considerably, but it has by no means stopped us nor will it.” He noted the 140 people still working on the Blue Brain Project, a recent set of positive reviews from five external reviewers, and its “exponentially increasing” ability to “build biologically accurate models of larger and larger brain regions.”
No time frame, this time, but there’s no shortage of other people ready to make extravagant claims about the future of neuroscience. In 2014, I attended TED’s main Vancouver conference and watched the opening talk, from the MIT Media Lab founder Nicholas Negroponte. In his closing words, he claimed that in 30 years, “we are going to ingest information. You’re going to swallow a pill and know English. You’re going to swallow a pill and know Shakespeare. And the way to do it is through the bloodstream. So once it’s in your bloodstream, it basically goes through it and gets into the brain, and when it knows that it’s in the brain, in the different pieces, it deposits it in the right places.”
Over my left shoulder, a hushed voice whispered, “Wow.”