You may already have read the hundreds of media articles today titled “brain training doesn’t work” and similar, based on the BBC “Brain Test Britain” experiment.
Once more, claims seem to go beyond the science backing them up … except that in this case it is the researchers, not the developers, who are responsible.
Let’s recap what we learned today.
The Good Science
The study showed that putting together a variety of brain games on one website, and asking people who happened to show up to play for a grand total of 3–4 hours over six weeks (10 minutes, three times a week), didn't result in meaningful improvements in cognitive functioning. This is useful information for consumers, because there are in fact websites and companies making claims based on similar approaches without supporting evidence. And this is precisely the reason SharpBrains exists: to help both consumers (through our book) and organizations (through our report) make informed decisions. The paper only included people under 60, which is surprising, but this is still useful information to know.
A TIME article summarizes the lack of transfer well:
“But the improvement had nothing to do with the interim brain-training, says study co-author Jessica Grahn of the Cognition and Brain Sciences Unit in Cambridge. Grahn says the results confirm what she and other neuroscientists have long suspected: people who practice a certain mental task — for instance, remembering a series of numbers in sequence, a popular brain-teaser used by many video games — improve dramatically on that task, but the improvement does not carry over to cognitive function in general.”
The Bad Science
The study, which was not a gold-standard clinical trial, contained obvious flaws in both methodology and interpretation, as some neuroscientists have started to point out. Back to the TIME article:
“Klingberg (note: Torkel Klingberg is a cognitive neuroscientist who has published multiple scientific studies on the benefits of brain training, and founded a company on the basis of that published work)…criticizes the design of the study and points to two factors that may have skewed the results.
On average the study volunteers completed 24 training sessions, each about 10 minutes long — for a total of three hours spent on different tasks over six weeks. “The amount of training was low,” says Klingberg. “Ours and others’ research suggests that 8 to 12 hours of training on one specific test is needed to get a [general improvement in cognition].”
Second, he notes that the participants were asked to complete their training by logging onto the BBC Lab UK website from home. “There was no quality control. Asking subjects to sit at home and do tests online, perhaps with the TV on or other distractions around, is likely to result in bad quality of the training and unreliable outcome measures. Noisy data often gives negative findings,” Klingberg says.”
More remarkably, a critic of brain training programs had the following to say in this Nature article:
“I really worry about this study — I think it’s flawed,” says Peter Snyder, a neurologist who studies ageing at Brown University’s Alpert Medical School in Providence, Rhode Island.
…But he says that most commercial programs are aimed at adults well over 60 who fear that their memory and mental sharpness are slipping. “You have to compare apples to apples,” says Snyder. An older test group, he adds, would have a lower mean starting score and more variability in performance, leaving more room for training to cause meaningful improvement. “You may have more of an ability to see an effect if you’re not trying to create a supernormal effect in a healthy person,” he says.
Second, the “dosage” was small, Snyder said. The participants were asked to train for at least 10 minutes a day, three times a week for at least six weeks. That adds up to only four hours over the study period, which seemed modest to Snyder.
Update (04/26): I just found this comment by Michael Valenzuela, responding to the Nature article:
“In our meta-analysis of cognitive brain training RCTs in healthy elderly*, doses of active training ranged from 10 hours to 45 hours, with an average dosage of 33 hours. Overall, the effect was significant and robust.
The minimum cited total dose in the BBC study was 3 hours (10 minutes three times a week for 6 weeks), and an average number of sessions is given as 23.86 and 28.39 for the two experimental groups. What was the average duration of each session? This information is not provided, nor controlled for, so let us assume 20 minutes per session, leading to an average total active training dose of 9.5 hours.
The BBC study therefore did not trial a sufficient dose of brain training, leaving aside the issue of the quality of training.
This study was seriously flawed and its conclusions are invalid.
*Valenzuela M., Sachdev P. Can cognitive exercise prevent the onset of dementia? A systematic review of clinical trials with longitudinal follow up. American Journal of Geriatric Psychiatry 2009 17:179–187.”
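Valenzuela's dose arithmetic above can be checked with a short sketch. Note that the 20-minute session length is his stated assumption, since actual session durations were not reported in the paper; the session counts (23.86 and 28.39) are the averages he cites for the two experimental groups:

```python
# Training "dose" arithmetic for the BBC study, following Valenzuela's comment.
# The 20-minute session length is an ASSUMPTION (not reported in the paper).

MIN_SESSION_MINUTES = 10       # minimum asked of participants per session
ASSUMED_SESSION_MINUTES = 20   # Valenzuela's assumed average duration
SESSIONS_PER_WEEK = 3
WEEKS = 6

# Minimum cited total dose: 10 min x 3 sessions/week x 6 weeks = 3 hours
minimum_dose_hours = MIN_SESSION_MINUTES * SESSIONS_PER_WEEK * WEEKS / 60
print(f"Minimum cited dose: {minimum_dose_hours:.1f} hours")

# Average dose per experimental group, assuming 20-minute sessions
for group, avg_sessions in [("group 1", 23.86), ("group 2", 28.39)]:
    dose_hours = avg_sessions * ASSUMED_SESSION_MINUTES / 60
    print(f"{group}: {dose_hours:.1f} hours")
```

Either way, the estimated doses (roughly 8 to 9.5 hours) fall below the 10-to-45-hour range covered by the meta-analysis he cites, which is the basis of his "insufficient dose" objection.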
The Ugly Logic
- We have decided to design and manufacture our first car ever
- Oops, our car doesn’t work
- Therefore, cars DON’T work, CAN’T work, and WON’T work
- Therefore, ALL car manufacturers are stealing your money.
- Case closed, let’s all continue riding horses.
Klingberg points this out too, stressing to TIME that the study “draws a large conclusion from a single negative finding” and that it is “incorrect to generalize from one specific training study to cognitive training in general.”
Posit Science dares to debunk the debunker (I have been critical of several of Posit Science’s marketing claims in the past, but in this case I agree with what they are saying):
“This is a surprising study methodology,” said Dr. Henry Mahncke, VP Research at Posit Science. “It would be like concluding that there are no compounds to fight bacteria because the compound you tested was sugar and not penicillin.”
We do need serious science and analysis on the value and limitations of scalable approaches to cognitive assessment, training and retraining. There are very promising published examples of methodologies that seem to work (which the BBC study design not only ignored but somehow managed to directly contradict), mixed with many claims not supported by evidence. What concerns me is that this study may not only confuse the public even more, but also stifle the much-needed innovation required to ensure we are better equipped over the next 5–10 years than we are today to meet the demands of an aging society in a rapidly changing world.
- You can read the full paper Here (opens free 5-page PDF)
- TIME article: Here
- Nature article: Here
- Wall Street Journal article: Here
- My email to BBC six months ago: Here (we gave the BBC full access to the January SharpBrains Summit on Technology for Cognitive Health and Performance; they chose not to engage)
- Free online guide: Here
- Consumer book: Here
- Executive report: Here
Previous SharpBrains articles: