Whether computerized games designed by psychologists and neuroscientists can literally make people smarter has been hotly debated by scientists, with a small but outspoken cadre of skeptics demanding stronger proof. Now two new studies have found the kind of real-world benefits from the brain-training games that skeptics have been calling for.
The first, published today in the Proceedings of the National Academy of Sciences, found that fewer than six hours of brain games played over the course of 10 weeks enabled poor first-graders who attend school irregularly due to family problems to catch up with their regularly attending peers in math and language grades.
The second, presented over the weekend at the Cognitive Neuroscience Society meeting in Boston, combined the results of 13 previous studies of computerized brain-training in young adults to conclude that training significantly enhances fluid intelligence—the fundamental human ability to detect patterns, reason, and learn. That is, practicing the games literally makes people smarter.
Together with other recent studies demonstrating real-world benefits of brain training in healthy older adults, preschoolers, and school children with ADHD, the new papers appear to provide fresh ammunition to psychologists and neuroscientists whose research has been under attack by a handful of skeptics who insist that the training is a waste of time.
“Here’s what the critics have asked for,” said Jason M. Chein, associate professor of psychology and principal investigator of the Temple University Neurocognition Lab in Philadelphia. “They have said these studies don’t translate into real-world benefits. But in the hands of these scientists, the effects look positive.”
Even one of the most outspoken critics, who has published critical studies of training in academic journals and opinion pieces in The New York Times, offered muted praise for the new study of first graders.
“This is a step in the right direction: a study looking at whether this stuff actually transfers to academic performance,” said D. Zachary Hambrick, professor of psychology at Michigan State University. But because the study of first graders had methodological weaknesses and statistical oddities, he added, “It has to be replicated. I don’t find the results to be compelling.”
The study involved 111 impoverished first-graders living in the slums of Buenos Aires, Argentina. They were taken out of their classrooms for 15 minutes at a time, up to three times per week, for 10 weeks, to play either ordinary computer games or specially designed games intended to increase attention, planning, and working memory. Children who attended school regularly saw no significant gain on their school grades associated with the training. But those whose school attendance was erratic, presumably due to disordered home environments, improved enough in language and math for their grades to catch up with their classmates’.
“With very brief training, we improved the language and math grades of children who have problems at home,” said the senior author, Andrea P. Goldin, a research scientist at the Integrative Neuroscience Laboratory at the University of Buenos Aires. “That’s what we are very excited about, because we are helping to equalize a bit the opportunities these children have.”
Psychologists familiar with the study shared Hambrick’s concerns about its methodology, particularly the lack of statistical measures demonstrating the strength of the training effect, but also welcomed the attempt by Goldin and her colleagues to demonstrate real-life benefits.
“It’s a really problematic paper, but I hope that they go on with this and replicate it in a much larger sample, using much better experimental procedures,” said Douglas K. Detterman, editor of the journal Intelligence and professor emeritus of psychology at Case Western Reserve University.
“If the result gets replicated, then it will be very important,” said Earl B. Hunt, professor emeritus of psychology at the University of Washington in Seattle. “But I want the replication.”
The psychologist whose 2008 study in the Proceedings of the National Academy of Sciences first showed that brain-training games can increase fluid intelligence—thereby setting off an academic debate that has yet to be settled—agreed that the new study had flaws but was an admirable attempt.
“I’m not particularly overwhelmed by the data,” said Susanne M. Jaeggi, assistant professor and director of the Working Memory and Plasticity Lab at the University of California, Irvine. “But it’s a cool study. It’s a really nice way to demonstrate an effect in an applied setting. It’s really difficult to do that in low socioeconomic circumstances.”
Other recently published studies have demonstrated real-world benefits. In January, scientists reported that 10 years after a group of 2,832 older volunteers participated in a mere 10 hours of computerized cognitive training, those who were randomly assigned to programs designed to improve their reasoning and reaction times still showed measurable benefits in everyday activities.
That study was deemed significant enough to merit a statement from the National Institute on Aging, in which NIA Director Richard J. Hodes was quoted as saying, “These longer-term results indicate that particular types of cognitive training can provide a lasting benefit a decade later. They suggest that we should continue to pursue cognitive training as an intervention that might help maintain the mental abilities of older people so that they may remain independent and in the community.”
Other studies of real-world outcomes have yielded mixed results, sometimes from the very same researchers. In July of last year, for instance, Susan Gathercole and Joni Holmes of the Cognition and Brain Sciences Unit at Cambridge University published a study in the journal Educational Psychology finding that computerized training of working memory significantly improved elementary school children’s grades in math and English. But the same pair published another study in November finding no classroom benefit.
Because of such conflicting findings, researchers like Gathercole and Holmes are conducting larger studies designed to tease out whether the real-world benefits hold up. In the meantime, Jaeggi and Chein say they are heartened by the results of meta-analyses aggregating data from earlier, smaller studies to gain statistical strength.
“The meta-analyses will prove more telling than any one study,” Jaeggi said.
One of the first meta-analyses in the field, published last year and since widely cited by skeptics, concluded that there was “no convincing evidence” that computerized training of working memory improved any significant, general skills.
But a new meta-analysis, by researchers at Northwestern University, updated those results to include more recent studies. Unlike the earlier one, which mixed together studies of children, young adults, and older adults, the new study analyzed results only from 13 earlier studies of healthy young adults. It found strong statistical evidence for consistent gains in measures of fluid intelligence. (The study is listed as paper G71 in the program of the Cognitive Neuroscience Society meeting.)
A co-author of the new meta-analysis said in an email that his group is still analyzing the results.
“That said, we’re pretty confident of the main finding that a proper meta-analysis shows a consistent and reliable, if modest, effect of working memory training on fluid intelligence measures,” said Paul J. Reber, professor of psychology at Northwestern University.