---
Blog post 4: Does brain training actually work?
The pro-training evidence
ACTIVE Trial: Detailed in Blog Post 1 above. The 20-year follow-up (Coe et al. 2026), which reported a 25% reduction in dementia risk for speed-of-processing training plus booster sessions, is the strongest single piece of evidence that cognitive training has real-world impact.
Jaeggi et al. 2008, PNAS 105(19):6829-33: 70 healthy young adults (mean age 25.6) trained on dual n-back working memory tasks for 8–19 days (~25 min/day). The study claimed working memory training transferred to improvements in fluid intelligence (measured by Raven's Advanced Progressive Matrices and BOMAT), with a dose-dependent relationship. This was the first study to suggest fluid intelligence might be trainable. However, multiple subsequent studies failed to replicate the far-transfer finding (Chooi & Thompson 2012; Redick et al. 2013). Criticisms include small sample sizes per condition and the use of different intelligence tests across groups.
Au et al. 2015, Psychonomic Bulletin & Review 22:366-377: A meta-analysis of n-back training studies found a small but significant positive effect on fluid intelligence. However, the lead and senior authors (Au and Jaeggi) are from the lab that produced the original 2008 study. Melby-Lervåg & Hulme (2016) reanalysed the same studies and found the effect disappeared when only active control groups were considered.
Basak et al. 2020, Psychology and Aging 35(2):220-249: A comprehensive meta-analysis of 215 studies from 167 articles. Overall effect size: g=0.28 (p<.001), significant for both near transfer (g=0.37) and far transfer (g=0.22). Single-component executive function training was most effective. Every module of multicomponent training yielded significant near and far transfer, including transfer to everyday functioning. Multicomponent training was more effective at generating far transfer than single-component training.
FINGER Trial — Ngandu et al. 2015, Lancet 385:2255-2263: N=1,260 adults aged 60–77 at increased dementia risk. A 2-year multidomain intervention (diet + exercise + cognitive training + social activity + vascular risk monitoring) produced, relative to the control group, 25% greater improvement in overall cognition, 83% greater improvement in executive function, and 150% greater improvement in psychomotor speed. The absolute effect size was modest (Cohen's d=0.13). The trial has spawned the World-Wide FINGERS network (60+ countries).
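A note on reading these numbers: a large "percent greater improvement" headline can coexist with a small standardized effect size. The sketch below uses entirely hypothetical values (not FINGER trial data) to show how an intervention group improving 25% more than controls can still yield a Cohen's d near 0.13, because d divides the absolute between-group gap by the person-to-person spread.

```python
# Illustrative only: hypothetical numbers, NOT the FINGER trial data.
# Shows why "25% greater improvement" and a modest Cohen's d can both be true.

def cohens_d(mean_a: float, mean_b: float, sd_a: float, sd_b: float,
             n_a: int, n_b: int) -> float:
    """Cohen's d using a pooled standard deviation."""
    pooled_var = ((n_a - 1) * sd_a**2 + (n_b - 1) * sd_b**2) / (n_a + n_b - 2)
    return (mean_a - mean_b) / pooled_var**0.5

# Hypothetical composite-score changes (z-score units):
# intervention improves 0.25, control improves 0.20 -> "25% greater
# improvement", but the absolute gap (0.05) is small next to the
# between-person spread (SD = 0.4), so d is modest.
d = cohens_d(0.25, 0.20, 0.4, 0.4, 630, 630)
print(round(d, 3))  # 0.125
```

The same logic applies to the 83% and 150% figures: relative percentages compare group means, while effect sizes compare that difference against individual variability.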
The anti-training evidence
2014 Stanford Consensus Letter: Organised by the Stanford Center on Longevity and the Max Planck Institute for Human Development. Key organiser: Laura L. Carstensen, Director of Stanford Center on Longevity. Signed by more than 70 international psychologists and neuroscientists, including Robert A. Bjork (UCLA), Randy L. Buckner (Harvard), Roberto Cabeza (Duke), Fergus Craik (Toronto), Randall Engle (Georgia Tech), and Lynn Hasher (Toronto). Key statement: "The scientific literature does not support claims that the use of software-based 'brain games' alters neural functioning in ways that improve general cognitive performance in everyday life, or prevent cognitive slowing and brain disease." A counter-letter signed by 127–133 scientists (spearheaded by Michael Merzenich, co-founder of Posit Science/BrainHQ) argued there IS "a large and growing body of evidence" — though many signatories had industry ties.
FTC vs. Lumosity (January 2016): $50 million judgment against Lumos Labs, suspended after payment of $2 million. The FTC challenged claims that games could delay memory decline, protect against dementia and Alzheimer's, improve school/work/athletic performance, and reduce effects of ADHD, PTSD, TBI, and chemotherapy. FTC Bureau Director Jessica Rich: "Lumosity preyed on consumers' fears about age-related cognitive decline… Lumosity simply did not have the science to back up its ads."
Simons et al. 2016, Psychological Science in the Public Interest 17(3):103-186: The most comprehensive review to date, examining all intervention studies cited on leading brain-training company websites. Conclusion: "Extensive evidence that brain-training interventions improve performance on the trained tasks. Less evidence that such interventions improve performance on closely related tasks. Little evidence that training enhances performance on distantly related tasks or that training improves everyday cognitive performance." None of the cited studies conformed to all of the best practices the authors identified as essential.
Melby-Lervåg, Redick & Hulme 2016, Perspectives on Psychological Science 11(4):512-34: Meta-analysis of 87 publications with 145 comparisons. Found reliable near transfer to working memory measures but "no convincing evidence of any reliable improvements" in far transfer (nonverbal ability, verbal ability, reading, arithmetic) when compared with active control conditions. The degree of working memory improvement did not predict the magnitude of far-transfer effects.
Sala & Gobet 2019, Collabra: Psychology 5(1):18: A second-order meta-analysis (a meta-analysis of meta-analyses). Tested three models spanning working memory training, video games, music, chess, and exergames. Critical finding: "When placebo effects and publication bias were controlled for, the overall effect size and true variance equaled zero." Conclusion: "The lack of generalization of skills acquired by training is thus an invariant of human cognition." Their 2023 follow-up (Perspectives on Psychological Science) titled "Cognitive Training: A Field in Search of a Phenomenon" reinforced this position.
The near versus far transfer debate
Near transfer (improvement on similar tasks) is well-established and uncontroversial. Far transfer (improvement on dissimilar tasks or real-world outcomes) is the crux of the debate. Without far transfer, brain training amounts to getting better at a game. The pattern is consistent: studies using active control groups (where controls do an alternative engaging activity) show smaller or null far-transfer effects compared to studies using passive controls (no contact), suggesting expectation, social engagement, or placebo effects inflate apparent benefits.
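The control-group logic above can be made concrete with a minimal arithmetic sketch (all numbers hypothetical): if training adds nothing beyond an expectation/engagement boost that any engaging activity provides, a passive-control comparison still shows an apparent effect, while an active-control comparison shows none.

```python
# Minimal sketch with assumed numbers: suppose far-transfer gain from
# training itself is zero, but doing ANY engaging activity adds a
# placebo/expectation boost of +0.15 (in z-score units).

placebo = 0.15            # expectation/engagement boost (assumed)
true_training_gain = 0.0  # far transfer under the null hypothesis

trained = true_training_gain + placebo  # played the brain games
active_control = placebo                # did an alternative engaging task
passive_control = 0.0                   # no-contact group

print(trained - passive_control)  # 0.15 -> looks like a training effect
print(trained - active_control)   # 0.0  -> "effect" disappears
```

This is why the meta-analyses above (Melby-Lervåg et al. 2016; Sala & Gobet 2019) treat active-control comparisons as the decisive test.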
Training types with the strongest far-transfer evidence: (1) Adaptive speed-of-processing training (ACTIVE trial), (2) multicomponent training (Basak 2020), and (3) combined physical + cognitive training (meta-analyses show effect size g ≈ 0.316).
Recent developments (2024–2026)
The Coe et al. 2026 ACTIVE 20-year follow-up is the most significant development. The US-POINTER Trial (Baker et al. 2025, JAMA 334:681-691) tested structured versus self-guided multidomain lifestyle interventions. The SMARRT Trial (Yaffe et al. 2024, JAMA Internal Medicine 184:54-62) tested personalised risk-reduction strategies. A 2025 systematic meta-review of 39 articles (21 meta-analyses + 18 systematic reviews) concluded "the prevailing evidence supports cognitive training" while noting most reviews had low quality ratings. Jaeggi, Seitz, and Pahor (now at Northeastern) are recruiting 30,000 volunteers for a large-scale citizen science study to test which brain training works for whom.
The debate remains polarised, but the 2026 ACTIVE results have injected fresh momentum into the pro-training camp specifically for adaptive speed-of-processing training. The distinction between targeted, evidence-based training protocols and generic commercial "brain games" has become sharper than ever.
SERP analysis
Top results for "does brain training work" include Scientific American (nuanced, both sides), Simons et al. on PubMed, Johns Hopkins coverage of the 2026 results, and BrainHQ's industry page. People Also Ask: "Does brain training prevent dementia?", "Is Lumosity scientifically proven?", "What brain exercises actually work?", "Does brain training improve IQ?", "What's the difference between brain training and cognitive training?"
---