Artificial brains are such energy hogs because they can be infinitely precise, which lets them draw on colossal troves of data to do what they do. Consider, for example, a neural network used for pattern recognition, the kind of system that is trained on a massive database of images so it can recognize faces. The humongous dataset required to train the system is what makes it effective, but it is also what keeps it from being efficient. In other words, engineers have figured out how to build computer systems with astonishing memory capacity, but those systems still need huge amounts of power to run.
This is a problem for anyone who wants the technology behind a brain-inspired computer to be widely available and scalable down to the kinds of devices, such as smartphones, that ordinary people actually use. The scaling problem also helps explain why scientists are so interested in building computers that mimic the human brain to begin with: human brains are both highly sophisticated processors (people carry around a lifetime of memories, after all) and remarkably energy-efficient.
If engineers can figure out what makes a human brain run so well, and on so little energy relative to its processing power, they might be able to build a computer that does the same.
“But that has always been a mystery,” says Stefano Fusi, a theoretical neuroscientist at Columbia University’s Zuckerman Institute. “What we wanted to understand is whether we can take advantage of the complexity of biology to essentially build an efficient [artificial] memory system.”
So Fusi and his colleague Marcus Benna, an associate research scientist at the institute, created a mathematical model that illustrates how the brain processes and stores new and old memories within its biological constraints. Their findings, published today in the journal Nature Neuroscience, demonstrate how synapses simultaneously form new memories while protecting old ones, and how older memories can help slow the decay of newer ones.
Their model shows that over time, as a person stores enough long-term memories and accumulates enough knowledge, human memory storage becomes more stable. At the same time, the plasticity of the brain diminishes. This change helps explain why babies and children are able to learn so much so quickly: Their brains are highly plastic but not yet very stable.
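To get a feel for how that kind of stabilization might emerge, consider a rough sketch of a multi-timescale synapse: a chain of coupled variables in which a fast component records new experiences and gradually passes them along to slower, more stable components. The code below is purely illustrative; the function name, constants, and dynamics are assumptions made for the sketch, not the model published in the paper.

```python
import numpy as np

def simulate_synapse(n_vars=5, n_steps=2000, event_times=(0,)):
    """Toy multi-timescale synapse: a chain of coupled variables.

    A plasticity event (a new memory) is written into the fastest variable;
    each variable then slowly exchanges with the next, slower one. The deeper
    a trace has seeped down the chain, the more slowly it fades.
    """
    u = np.zeros(n_vars)                              # hidden state of one synapse
    couplings = 0.1 * 2.0 ** -np.arange(n_vars - 1)   # progressively weaker links
    readout = []                                      # visible synaptic weight over time
    for t in range(n_steps):
        if t in event_times:
            u[0] += 1.0                               # a new memory nudges the fast variable
        flow = couplings * (u[:-1] - u[1:])           # diffusion-like exchange between neighbors
        u[:-1] -= flow
        u[1:] += flow
        u[0] *= 0.999                                 # mild leak on the fastest variable only
        readout.append(u[0])
    return np.array(readout)

w = simulate_synapse()
print(f"weight shortly after the event: {w[10]:.3f}")   # drops quickly at first...
print(f"weight long after the event:    {w[-1]:.3f}")   # ...then lingers, fed by the slow variables
```

In a scheme like this, a fresh trace is vivid but fragile, while anything that has had time to seep into the slower variables becomes hard to erase and keeps feeding back into the visible weight. That is one way to picture how accumulated knowledge can make memory storage more stable even as the system grows harder to change.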
“That’s why there is a critical period for many abilities like learning languages,” Fusi says. “As you accumulate knowledge, it becomes extremely difficult to learn something new, much more difficult than it is for kids. That’s certainly reflected by any kind of model like ours, where you essentially have what is called metaplasticity.”
Metaplasticity, which refers to the way a synapse’s plasticity itself changes over time based on its past activity, is a crucial component of the model Fusi and Benna created. In older simulations, the kinds of neural networks that help power many existing machine-learning systems, each synapse is represented by a single variable that can be tweaked indefinitely, to essentially any value, as the system runs. “But there’s nothing like that in nature,” Fusi says. “It’s not possible to have billions of different values for a synapse [in the human brain].”
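One simple way to picture that constraint is to put the two kinds of synapse side by side in code. The sketch below is hypothetical, in the general spirit of what Fusi describes rather than the model in the paper: the bounded synapse can only be weak or strong, and the more its current state has been reinforced, the harder it becomes to overwrite.

```python
import random

class IdealizedSynapse:
    """A synapse as many artificial networks treat it: a single real number
    that can be nudged indefinitely, with arbitrary precision."""
    def __init__(self):
        self.weight = 0.0

    def update(self, delta):
        self.weight += delta          # no bounds, no memory of past changes


class MetaplasticSynapse:
    """A toy bounded synapse with metaplasticity: two visible states (weak or
    strong) plus a hidden 'depth' that records how entrenched the current
    state is. Illustrative only, not the published model."""
    def __init__(self, max_depth=4):
        self.strong = False
        self.depth = 1                # how hard the current state is to flip
        self.max_depth = max_depth

    def update(self, potentiate):
        if potentiate == self.strong:
            # reinforcing the current state entrenches it further
            self.depth = min(self.depth + 1, self.max_depth)
        elif random.random() < 1.0 / self.depth:
            # an opposing event flips the state, but less often the deeper it sits
            self.strong = potentiate
            self.depth = 1


random.seed(0)
syn = MetaplasticSynapse()
for _ in range(10):
    syn.update(potentiate=True)       # repeated potentiation entrenches the memory
syn.update(potentiate=False)          # a single opposing event rarely dislodges it
print(syn.strong)                     # most likely still True
```

The bounded synapse makes do with a handful of discrete values rather than billions, yet its entrenched state protects an old memory from being casually overwritten. The flip side, which echoes the trade-off Fusi describes, is that the more entrenched a synapse becomes, the harder it is to teach it something new.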