Our Self-evaluation

An Analysis of BrainHQ Exercises Using the Institute of Medicine Criteria for the Evaluation of Cognitive Training Programs

Henry W. Mahncke*, Michael M. Merzenich
Posit Science Corporation

Introduction

In May 2015, the Institute of Medicine (recently re-named the “National Academy of Medicine,” and hereafter referred to as “IoM”) released a report entitled “Cognitive Aging: Progress in Understanding and Opportunities for Action.”1 This 318-page report was authored by a 16-member committee composed of leading academic scientists “charged with assessing the public health dimensions of cognitive aging with an emphasis on definitions and terminology, epidemiology and surveillance, prevention and intervention, education of health professionals, and public awareness and education.” To develop the report, the authors conducted a literature review, held four public and private meetings, received presentations from 19 researchers, and invited an additional 17 researchers to review and comment on the final report.

A relatively brief part of the IoM report reviewed the field of cognitive training (generally, pages 187-190). The report noted promise from several specific clinical trials, including the Advanced Cognitive Training for the Independent and Vital Elderly (ACTIVE) study and the Improvement in Memory with Plasticity-based Adaptive Cognitive Training (IMPACT) study. In discussing the ACTIVE study, the report commented that “training in speed of processing and reasoning resulted in cognitive improvements”† and that these results “showed that those trained to enhance visual processing speed were significantly less likely to be involved in at-fault [automobile] crashes.”‡ With regard to the IMPACT study, the report noted that “older adults in the active group improved, relative to the control group on auditory measures from the Repeatable Battery for the Assessment of Neuropsychological Status as well as on other measures of attention and memory.”§ The IoM report goes on to note inconsistencies in the scientific literature, and that “transfer results for older adults have been mixed, with some studies failing to observe any transfer.”**

This section of the IoM report concluded with the statement that “ongoing debate by experts in the field about the utility of commercial cognitive training games points to the need for careful evaluation of these efforts”†† and suggests five specific criteria by which the effectiveness of a cognitive training program should be evaluated:

“Has the product demonstrated transfer of training to other laboratory tasks that measure the same cognitive construct as the training task?

Has the product demonstrated transfer of training to relevant real-world tasks?

Has the product performance been evaluated using an active control group whose members have the same expectations of cognitive benefits as do members of the experimental group?

How long are the trained skills retained?

Have the purported benefits of the training product been replicated by research groups other than those selling the product?”

The epigraph from the IoM report states that “Knowing is not enough; we must apply. Willing is not enough, we must do.”‡‡ Towards this goal, we conducted a self-evaluation of BrainHQ, a cognitive training program developed by Posit Science, guided by the five requirements specified in the IoM report. We describe the results of this evaluation in this white paper.

Methods

Literature Review

We created a database of publications involving BrainHQ exercises, based on our existing records, Google Scholar searches (incorporating PubMed) for senior authors of existing publications, and Google Scholar searches for the terms “BrainHQ”, “Brain HQ”, “Posit Science”, “Brain Fitness Program”, “processing speed training”, and “Useful Field of View”. We did not attempt to systematically identify trials from clinicaltrials.gov or studies submitted but not yet accepted for publication; accepted studies of which we were aware were considered. Each publication was confirmed to involve at least one cognitive training exercise from BrainHQ, then classified by publication type (randomized controlled trial, non-randomized trial, protocol, review, commentary) and by participant population (healthy adults aged 18 and above, children aged under 18, and various clinical indications).

The current analysis was developed in response to the IoM report on Cognitive Aging, and thus focuses on randomized controlled trials in the healthy adult population. To be classified as a randomized controlled trial, the study was required to be a randomized, controlled, prospective, parallel arm trial. Case studies and case study series were excluded, as were single-arm studies and non-randomized trials. Studies with active control and no-contact/wait list controls were included. To be classified as a study involving a healthy adult population, study inclusion/exclusion criteria were required to be generally directed at healthy adult populations, and exclude clinical conditions. Specifically, studies with inclusion criteria focused on mild cognitive impairment, Alzheimer’s disease, Parkinson’s disease, and other clinical conditions were excluded from the current analysis. The complete database can be viewed at https://www.zotero.org/groups/cognitive_training_data/
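The inclusion criteria above amount to a simple filter over the study database. The following is a minimal sketch of that logic; the record fields and category values are hypothetical illustrations, not the actual schema of the Zotero database.

```python
from dataclasses import dataclass

@dataclass
class Study:
    design: str        # e.g., "rct", "single_arm", "case_series"
    randomized: bool
    control: str       # e.g., "active", "no_contact", "none"
    population: str    # e.g., "healthy_adult", "mci", "parkinsons"

def include_in_analysis(s: Study) -> bool:
    """Apply the stated inclusion criteria: randomized, controlled,
    prospective, parallel-arm trials in healthy adults; both active
    and no-contact/wait-list controls qualify."""
    if s.design != "rct" or not s.randomized:
        return False                     # excludes case series, single-arm, non-randomized
    if s.control not in ("active", "no_contact"):
        return False
    return s.population == "healthy_adult"  # excludes MCI, Parkinson's, etc.

studies = [
    Study("rct", True, "active", "healthy_adult"),
    Study("rct", True, "no_contact", "mci"),              # clinical population: excluded
    Study("single_arm", False, "none", "healthy_adult"),  # not an RCT: excluded
]
selected = [s for s in studies if include_in_analysis(s)]
```

Of the three example records, only the first survives the filter, mirroring how clinical-population and non-randomized studies were set aside for this analysis.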

BrainHQ Exercises

BrainHQ is a software system composed of a number of cognitive training exercises. Different studies in the study database used different sets of exercises. For reference, we here describe the various sets of exercises used, and their cognate BrainHQ exercise names.

In particular, we note that the cognitive training program referred to as “speed training” in numerous publications, originally developed and then iteratively refined by Ball and Roenker2, was acquired by Posit Science in 20073. Speed training was updated to run as a CD-ROM exercise (as part of the InSight program, as an exercise called Road Tour), and then as a web-based and mobile exercise (as part of the BrainHQ program, as an exercise called Double Decision). A concordant validation study established the functional equivalence of the new version4.

Overall Approach and Operational Definitions of IoM Criteria

Our goal was to evaluate each publication and identify the specific IoM criteria that each publication addressed. To do so, we constructed operationalized definitions of each of the five IoM criteria:

Has the product demonstrated transfer of training to other laboratory tasks that measure the same cognitive construct as the training task? For the purposes of this evaluation, we included as “laboratory tasks” all quantitative pencil & paper neuropsychological assessments (e.g., the Rey Auditory Verbal Learning Test) and standardized computerized cognitive assessments with published administration methods and normative data (e.g., the Useful Field of View assessment). We excluded measures that showed progress (or lack of progress) in the cognitive training exercise performance itself.

We specifically note that although speed training and the Useful Field of View assessment are clearly related, the original developers consider them to be meaningfully distinct (e.g., Ball5, page 20). Thus, for the purposes of this evaluation, we classified the Useful Field of View assessment as a laboratory measure of transfer to the speed and attention domains, due to the standardization of its administration protocols6, the availability of normative data7, and the common use of this assessment outside of cognitive training studies8–10.

Has the product demonstrated transfer of training to relevant real-world tasks? For the purposes of this part of the evaluation, we included all directly observed functional measures (e.g., Timed Instrumental Activities of Daily Living), data from real-world observations (e.g., automobile crash incidence), and self-report measures (e.g., the SF-36, a measure of health-related quality of life).

Has the product performance been evaluated using an active control group whose members have the same expectations of cognitive benefits as do members of the experimental group? For the purposes of this part of the evaluation, we excluded all studies with no-contact or wait-list controls, and included active control groups designed to control for placebo and expectation effects (e.g., adult learning activities, crossword puzzles, health education, video games).

How long are the trained skills retained? For the purposes of this part of the evaluation, we included all studies with a follow-up assessment conducted after a period of time following the completion of cognitive training.

Have the purported benefits of the training product been replicated by research groups other than those selling the product? For the purposes of this part of the evaluation, we operationalized this criterion by defining key benefits of the training program and noting replication of such benefits in studies whose principal investigators had no personal financial conflict of interest with Posit Science.

Major Study Review

The 52 publications identified in the literature review were derived from 25 distinct trials (Appendix Table A-1), ranging in size from 22 to 2,832 participants. Here, we describe the general characteristics of the three largest studies.

ACTIVE: The Advanced Cognitive Training for the Independent and Vital Elderly (ACTIVE, NCT00298558)11 was a randomized, controlled, prospective, parallel arm trial, jointly organized and funded by multiple Institutes from the National Institutes of Health, including the National Institute on Aging and the National Institute of Nursing Research, and with no involvement from Posit Science. ACTIVE enrolled 2,832 participants aged 65 and above without dementia from six study sites. The study compared three cognitive training interventions (memory training, reasoning training, and speed training) and a no-contact control group. The speed training program was the original cognitive training exercise that later became the Double Decision exercise in BrainHQ. Training groups completed a total of ten hours of cognitive training spread over a period of two months, with a subset of participants completing an additional four hours of booster training at each of the 11-month and 35-month time points. Outcome measures included transfer measures to laboratory tasks (e.g., cognitive assessments of memory, reasoning, and speed); and to real-world measures (e.g., timed instrumental activities of daily living, instrumental activities of daily living, health-related quality of life, auto crash incidence). Participants were followed for 10 years following the completion of training.

IHAMS: The Iowa Healthy and Active Minds Study (IHAMS, NCT01165463)12 was a randomized, controlled, prospective, parallel arm trial, funded by the NIH. Posit Science provided the cognitive training software and technical support, but did not otherwise fund or support the study. IHAMS enrolled 681 participants aged 50 and above, without dementia. The study compared speed training (the CD-ROM version referred to as Road Tour) to an active control (computerized crossword puzzles), and was designed to explicitly compare younger and older populations, and in-home and in-clinic training. Outcome measures included transfer measures to laboratory tasks (e.g., cognitive assessments of speed, attention, and executive function) and to real-world measures (e.g., IADLs, mood). Participants were followed for a year after the completion of training.

IMPACT: The Improvement in Memory with Plasticity-based Adaptive Cognitive Training (IMPACT, NCT00283010)13 trial was a randomized, controlled, prospective, parallel arm trial, sponsored by Posit Science and conducted with Mayo Clinic and the University of Southern California. IMPACT enrolled 487 participants aged 65 and above without dementia from three study sites. The study compared cognitive training (Brain Fitness Program) to an active control (DVD-based adult education courses with quizzes). Outcome measures included transfer measures to laboratory tasks (e.g., neuropsychological assessments of memory and attention) and to real-world measures (i.e., participant-reported outcome measures). Participants were followed for three months after the completion of training.

1) Has the product demonstrated transfer of training to other laboratory tasks that measure the same cognitive construct as the training task?

Numerous publications with BrainHQ exercises document transfer of the effects of training to “other laboratory tasks measuring the same cognitive construct as the training task.” Key examples include

ACTIVE, which showed improvement on the Useful Field of View task, a standardized measure of visual speed and attention14.

IHAMS, which showed improvement on the Useful Field of View task, Trails A, Trails B, the symbol-digit modalities test, and the Stroop word test15.

IMPACT, which showed improvement on the Rey Auditory Verbal Learning task (immediate and delayed measures), letter-number sequencing, and digit span; as well as on a pre-specified composite measure composed of the auditory tests of the Repeatable Battery for the Assessment of Neuropsychological Status13.

Across all publications, transfer results have been shown in standard laboratory measures of memory13,16–27, attention4,13–15,18,20,25,27–43, speed4,14,15,23,28–44, executive function15,24,26, and hearing & language23,45.

2) Has the product demonstrated transfer of training to relevant real-world tasks?

Numerous publications with BrainHQ exercises document transfer of the effects of training to “relevant real-world tasks.” Key examples include

ACTIVE, which showed protection against decline in instrumental activities of daily living14,31,40, equivalent to a ~24-month preservation of the skills needed to maintain independent living; transfer to a real-world processing speed measure (for the group receiving booster training)14,31; and a 48% reduction in the number of at-fault automobile crashes recorded by state departments of motor vehicles46.

IHAMS, which showed transfer to instrumental activities of daily living and standard measures of mood and depression47.

3) Has the product performance been evaluated using an active control group whose members have the same expectations of cognitive benefits as do members of the experimental group?

Numerous publications with BrainHQ exercises are based on trials with an active control group “whose members have the same expectations of cognitive benefits as do members of the experimental group.” Key examples include

ACTIVE, which included three types of cognitive training for comparison purposes, including memory training, reasoning training, and speed training; as well as a no-contact control group11.

IHAMS, which used computerized crossword puzzles as an active control12.

4) How long are the trained skills retained?

Numerous publications with BrainHQ exercises have evaluated the retention of both transfer to laboratory measures and real-world behaviors. Key examples include

ACTIVE, which documented retention at the two-year14, five-year31, and ten-year40 follow-up points; and specifically measured how to most effectively maintain performance39.

IHAMS, which documented retention at the one-year follow-up point15,47.

IMPACT, which documented retention at the three-month follow-up point18.

Other publications examined retention over intervals between three months and three years17,20,26,30,32,44,59.

5) Have the purported benefits of the training product been replicated by research groups other than those selling the product?

All benefits from BrainHQ exercises have been either originally established or replicated by “research groups other than those selling the product.”

A single class of key benefits, that BrainHQ exercises improve memory and attention, was originally established by studies led by Posit Science internal teams16,17. This benefit was subsequently confirmed and extended by the IMPACT study, sponsored by Posit Science through institutional research grants to Mayo Clinic and the University of Southern California, with analyses supervised by an independent Data Review Committee that oversaw the scientific publications13,18,19.

All other key classes of benefits were originally established by independent research groups, and frequently replicated by further independent research groups. Two key examples of transfer to real-world tasks include

Speed training improves timed instrumental activities of daily living, a measure of everyday real-world speed, a benefit originally established by a research group at the University of Alabama28,29, then replicated by the ACTIVE study14,31, conducted by Johns Hopkins University, University of Alabama, University of Pennsylvania, Indiana University, University of Florida, Hebrew Rehabilitation Center for the Aged, New England Research Institute, University of Iowa, and the National Institutes of Health11.

Speed training improves depressive symptoms and instrumental activities of daily living, a measure of the skills required for independent living, benefits originally established in the ACTIVE study14,31,40,48,49, and then replicated by the IHAMS study47, conducted at the University of Iowa12.

Negative Results

We note that two types of negative results arose in the literature review.

Trials showing overall negative results: A single study (Memory and eXercise, MAX)61 employed a 2×2 factorial design in 126 older adults with cognitive complaints, comparing active cognitive training (BrainHQ exercises) against a placebo (DVD-based adult education), fully crossed with active physical training (aerobic exercise) against a placebo (stretching and strengthening). No significant differences were demonstrated on a global cognitive measure61, and groups in both putative placebo arms reported improved sleep quality62. The authors did note a trend-level effect of cognitive training in individuals with low memory scores at baseline. The lack of effect of both the cognitive training and physical training interventions was noted as surprising by the authors, and we note that other combined physical/cognitive exercise trials have shown positive results25. We are not aware of any other studies in healthy adults with overall negative results.

Trials with a mix of positive and negative outcome measures: Many trials employ a comprehensive set of outcome measures, and some show a mix of results across outcome measures. For example, the IHAMS trial showed transfer to Trails A, Trails B, the symbol-digit modalities test, and the Stroop word test; but not to verbal fluency or the digit vigilance test15 or to a composite outcome like that used in MAX63.

Summary of Results

The results of the literature review are summarized in the table below. Note that some trials did not evaluate transfer to laboratory measures, some did not evaluate transfer to real-world tasks, and some did not evaluate maintenance of effect. All 25 trials were evaluable for the active control and independence criteria.

IoM criterion                         Trials Evaluating Criterion   Trials Meeting Criterion   Trials Not Meeting Criterion
1) Transfer to Laboratory Measures    23                            21                         2
2) Transfer to Real-World Measures    10                            9                          1
3) Active Control                     25                            15                         10***
4) Maintenance of Effect              9                             9                          0
5) Independence                       25                            22                         3

Discussion

Summary

A broad acceptance of the criteria required to establish the effectiveness of a cognitive training program could help the field, allowing users, health care providers, and other stakeholders to make evidence-based decisions regarding how to put the technology to use.

Relevant Literature Outside The Scope of the Criteria

Although outside the immediate scope of this analysis of the requirements specified in the IoM Cognitive Aging report, it is of interest to note that multiple studies of BrainHQ exercises using brain imaging technologies have been performed by independent research groups, documenting the underlying neurological changes driven by such training, using EEG21–23,27,36,44,45, fMRI33, PET64, diffusion tensor imaging55, and pupillometry65.

In addition, BrainHQ exercises have been used in multiple studies with people with clinically significant cognitive impairment (e.g., chemobrain66, HIV-associated neurocognitive impairment67, heart failure68, mild cognitive impairment69, and schizophrenia70). These studies have shown benefits that transfer to both laboratory and real-world tasks, which persist over time, include comparison to active controls, and were conducted by independent academic researchers.

A Note on the Interpretation of the ACTIVE Study, Far Transfer, and Transfer to Real-World Measures

A common summary of the ACTIVE study states that it documented near transfer but not far transfer, which is taken to mean that ACTIVE did not show real-world benefits (e.g., Span71). However, this is not an accurate summary of the results from ACTIVE72. The investigators specified three a priori hypotheses in their protocol.

First, that there would be limited or no transfer across the speed, memory, and reasoning training programs, with each program expected to improve primarily its own proximal endpoints (Jobe73, page 455).

Second, that each training program would show benefits to the co-primary instrumental activities of daily living outcome measure (the MDS Home Care measure), and to an additional co-primary outcome measure of everyday cognitive function specific to the trained domain (in the case of speed training, to the everyday speed measure, composed of timed instrumental activities of daily living and the complex reaction time task) (Jobe73, page 456).

Third, that each training program would show benefits on a set of secondary outcome measures, including health-related quality of life and depressive symptoms.

The ACTIVE study confirmed those hypotheses. Speed training improved the proximal measure of speed and attention, and generalized to the a priori IADL outcome measure14,31,40 and to the a priori everyday speed measure (in the case of the group engaging in booster sessions)14,31.

The limited transfer across training domains is scientifically interesting and sheds light on the neurological mechanisms of transfer, but it was not a surprise.

This confusion regarding transfer likely derives from imprecise definitions of “far transfer” and “real-world benefits.” Far transfer, originally defined by Ceci74 (for studies of education) and then extended by Zelinski75 (to studies of cognitive aging), is generally taken to mean improvement on measures in cognitive domains different from the one targeted by the training program, a definition that includes real-world measures as well. By that definition, however, ACTIVE did demonstrate far transfer: it documented numerous examples of transfer to real-world tasks from speed training†††, including instrumental activities of daily living14,31,40 (an a priori primary outcome measure11), as well as real-world measures of everyday speed (in the booster group)14,31, depressive symptoms48,49, health-related quality of life50,51, self-rated health52, locus of control53, predicted medical expenditures54, driving cessation58, confident driving behaviors60, and real-world automobile crash incidence46. These results show that ACTIVE demonstrated far transfer of cognitive training to real-world tasks multiple times.

Research Directions

Nothing in this white paper should be taken to mean that research on BrainHQ exercises or in cognitive training in general is complete. More research is clearly necessary, and will be for the foreseeable future. Key questions include the neurological basis for cognitive enhancement, how to optimize training intensity and frequency to sustain benefits over long periods of time, the identification of sub-populations who are either more or less likely to benefit from specific types of cognitive training, how to effectively combine cognitive training with physical exercise and nutritional interventions, and how to deliver the training in community-based settings where it can be accessible to all.

In addition, the central question of whether cognitive training can affect the onset of Alzheimer’s disease and other dementias remains open. Results from ACTIVE show that 10 hours of cognitive training did not provide a statistically significant reduction in the risk of dementia onset at the five-year follow-up point; however, the analysis noted that the risk ratios were in the protective direction (a 17% risk reduction for the speed training group), and that the study was underpowered to detect statistical significance at this effect size76. A study with a significantly more intensive cognitive training intervention, sustained over time, in a larger population at higher risk for dementia, and followed for a longer period would be of substantial interest.
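The underpowering issue above can be illustrated with a rough two-proportion power calculation. All numbers below are hypothetical placeholders chosen only to show the shape of the problem; they are not the actual ACTIVE dementia incidence figures or arm sizes.

```python
import math

def normal_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power_two_proportions(p1: float, p2: float, n_per_arm: int) -> float:
    """Approximate power of a two-sided two-proportion z-test at alpha = 0.05.

    p1, p2: event rates in the control and treatment arms.
    n_per_arm: participants per arm.
    """
    p_bar = (p1 + p2) / 2.0
    # Standard error under the null (pooled) and under the alternative.
    se0 = math.sqrt(2.0 * p_bar * (1.0 - p_bar) / n_per_arm)
    se1 = math.sqrt(p1 * (1.0 - p1) / n_per_arm + p2 * (1.0 - p2) / n_per_arm)
    z_alpha = 1.959964  # two-sided 5% critical value
    return normal_cdf((abs(p1 - p2) - z_alpha * se0) / se1)

# Hypothetical illustration: a 17% relative risk reduction on a 10% base rate
# (0.10 vs. 0.083) is a small absolute difference, so a typical training-arm
# size yields low power, while a much larger trial detects it reliably.
power_small_trial = power_two_proportions(p1=0.10, p2=0.083, n_per_arm=700)
power_large_trial = power_two_proportions(p1=0.10, p2=0.083, n_per_arm=10000)
```

Under these assumed rates, an arm of several hundred participants has well under 50% power to detect the effect, which is why a protective-direction risk ratio can fail to reach significance; the calculation also shows why a larger, higher-risk, longer-followed population would be informative.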

Summary

A review of the published scientific literature describing randomized, controlled clinical trials of BrainHQ exercises in healthy adults documents that these cognitive training exercises have:

demonstrated transfer of training to other laboratory tasks that measure the same cognitive construct as the training task,

demonstrated transfer of training to relevant real-world tasks,

been evaluated using an active control group,

been evaluated to document how long the trained skills are retained, and

been replicated by research groups other than Posit Science.

In addition to the scientific literature in aggregate meeting the five IoM criteria, several trials individually meet each of the five IoM criteria. ACTIVE, IHAMS, and IMPACT each demonstrated transfer to laboratory tasks and real-world tasks, were evaluated using active controls, documented retention of skills, and have been replicated by independent research groups.

On this basis, BrainHQ meets the Institute of Medicine criteria for an evidence-based cognitive training program.