A 9-month-old participant in the Rutgers lab wears a stretchy bonnet with 128 EEG sensors. (Andrew Hetherington/Courtesy of Rutgers University)

Can you make your baby smarter?

Former Georgia governor Zell Miller thought so. In 1998, he proposed that $105,000 be added to the state budget to buy every newborn a tape or CD of classical music. Mozart and Bach, he said, would help babies in their future engineering careers.

“Having that infant listen to soothing music helps those trillions of brain connections to develop,” he told the state general assembly in Atlanta. Then he played them an excerpt from the “Ode to Joy” section of Beethoven’s Ninth Symphony.

“Now don't you feel smarter already?” he said. “Smart enough to vote for this budget item, I hope.”

The budget passed, and, for a few years, parents of newborns in Georgia received a free classical recording along with their child. The “Mozart Effect” has now been largely debunked—as has much other evidence of non-musical benefits to making or listening to music—but it lives on in popular consciousness. And little wonder: It’s alluring to think that listening to something might make you smarter.

A new study says that some sounds can, in fact, improve a specific type of skill in young children. Infants who played a quasi-game built around computer-adjusted sound recordings developed their acoustic maps faster and more accurately than children who did not.

Acoustic maps are one way neuroscientists understand how the brain processes sound—they show the brain “coding” sound into signal—and their strength and speed are good predictors of future success at reading and other language skills.

The study was led by April Benasich, a professor of neuroscience and the director of the Infancy Studies Laboratory at Rutgers University.

“Babies can acoustically discriminate between every single sound in every single language in the world,” Benasich said. Over time, after learning which sounds are important or unimportant, they begin to suppress a brain response to sounds not in the native language.

“But some babies seem to focus on the wrong tier or grain,” she said. The training that her team developed coaxes babies into listening for the right speed of sound.

In the training, Benasich and her team invited four- to seven-month-olds (and their parents) to their lab once a week for six weeks. There, they played the babies special recordings of whooshes and beeps for seven minutes. The recordings contained tiny modulations and changes that mimicked the tiny modulations and changes of speech.

“The sounds,” Benasich said, were “not language but had language-like configurations. They told the babies, ‘Pay attention, this could be important!’”

That is, the oscillation and manipulation of the recordings happened at the level of tens of milliseconds. This is the scale of sound that is crucial for listening to language: Only 40 milliseconds, for example, separates a buh from a duh sound. If babies “caught” these changes, they looked at a specific screen at the right time, which—if the children’s timing was correct—rewarded them by playing a short video.

Here’s an example of those recordings, provided by the Benasich lab. Below, you’ll hear two sequences of beeps: first, a “standard” version; then, a “deviant” modification of the first sound. The deviant sequence differs only subtly from the standard one, but the two are fairly easy for our language-attuned brains to tell apart:

At first, the changes in the sounds that the infants were asked to discern were easy to pick up. As the babies caught on, the sounds got progressively harder to hear. In effect, the game trained the babies to notice minute aural differences—exactly the training that comes in handy in language-learning.

Compared to babies who did not go through the training—and babies who only passively listened to the sounds, rather than receiving the video-associated training—the trained babies showed more responsiveness to certain kinds of aural stimuli. When measured by an EEG machine, their acoustic maps were shown to be more plastic, more accurate, and faster at discerning details in the sound.

“What this type of engagement could do is make those maps more precise,” Benasich said. “Babies who are really, really good at this have a huge advantage.”

Eventually, Benasich and her team hope to use this training to help young children who are at risk of developing auditory processing disorders.

“I can look at a baby early on, and, with 90 percent accuracy, I can predict who’ll be a standard deviation above or below the [normal] response,” she told me. Seventy to 80 percent of infants with these types of auditory processing problems, she said, go on to develop reading disorders like dyslexia.

But being able to identify disorders before they fully manifest doesn’t mean much right now. Doctors have to wait to see whether a processing disorder develops—after the child learns to speak—before the child can be enrolled in speech therapy.

A baby connected to an EEG in the Rutgers lab (Andrew Hetherington/Courtesy of Rutgers University)

Benasich’s team’s eventual goal is to turn the auditory training into a toy that babies could operate by themselves. If such a toy were effective, she said, many children might never develop auditory disorders in the first place.

At least, that’s her hope. She said that the original experimental group of infants continued to show strong benefits, compared to children who didn’t receive the training, through at least nine months of age, and that research on that group is ongoing.

Kathryn Hirsh-Pasek, a professor at Temple University and a psychologist of early childhood language learning who was not connected to the experiment, said the results seemed sound.

“It's exciting. It expands on previous research,” she said, referring to previous evidence that auditory maps could be improved. “It's definitely not a slam-dunk, but it’s a good first step.”

“Our first step in language-learning is to notice patterns in the sound-stream,” Hirsh-Pasek said. She confirmed that children who are not as good at this skill face problems. “[The results] speak to how we can tweak the experience to create a difference in how the brain responds to sounds.”

To Hirsh-Pasek, the experiment’s greatest finding was its support for the idea that young children learn best when there is a goal in mind.

“We can't just expect learning to happen,” she told me. “It does, all the time—that’s implicit learning. But if we have a learning goal, explicitly focusing on that helps learning happen faster, especially with really young kids. Mere exposure isn't the same as doing something with information.”

And that’s true for all learners—not just four-month-old ones.
