Abstract

Increased preference for immediate over delayed rewards and for risky over certain rewards has been associated with unhealthy behavioral choices. Motivated by evidence that enhanced cognitive control can shift choice behavior away from immediate and risky rewards, we tested whether training executive cognitive function could influence choice behavior and brain responses. In this randomized controlled trial, 128 young adults (71 male, 57 female) participated in 10 weeks of training with either a commercial web-based cognitive training program or web-based video games that do not specifically target executive function or adapt the level of difficulty throughout training. Pretraining and post-training, participants completed cognitive assessments and functional magnetic resonance imaging during performance of the following validated decision-making tasks: delay discounting (choices between smaller rewards now vs larger rewards in the future) and risk sensitivity (choices between larger riskier rewards vs smaller certain rewards). Contrary to our hypothesis, we found no evidence that cognitive training influences neural activity during decision-making; nor did we find effects of cognitive training on measures of delay discounting or risk sensitivity. Participants in the commercial training condition improved with practice on the specific tasks they performed during training, but participants in both conditions showed similar improvement on standardized cognitive measures over time. Moreover, the degree of improvement was comparable to that observed in individuals who were reassessed without any training whatsoever. Commercial adaptive cognitive training appears to have no benefits in healthy young adults above those of standard video games for measures of brain activity, choice behavior, or cognitive performance.

SIGNIFICANCE STATEMENT Engagement of neural regions and circuits important in executive cognitive function can bias behavioral choices away from immediate rewards. Activity in these regions may be enhanced through adaptive cognitive training. Commercial brain training programs claim to improve a broad range of mental processes; however, evidence for transfer beyond trained tasks is mixed. We undertook the first randomized controlled trial of the effects of commercial adaptive cognitive training (Lumosity) on neural activity and decision-making in young adults (N = 128) compared with an active control (playing on-line video games). We found no evidence for relative benefits of cognitive training with respect to changes in decision-making behavior or brain response, or for cognitive task performance beyond those specifically trained.

In this first randomized controlled trial of the effects of adaptive cognitive training on choice behavior and neural responses, 128 young adults received 10 weeks of a web-based computerized intervention, consisting of either commercially available adaptive cognitive training or control training using computer games delivered in the same manner. The control training was designed to account not just for nonspecific placebo and social desirability effects, but also for two components believed to be critical to efficacy of adaptive cognitive training (Morrison and Chein, 2011; Shipstead et al., 2012). Unlike cognitive training, control games were not explicitly designed to tax executive functions and were not adaptive (i.e., difficulty levels were not adjusted over the course of training to users' current level of performance). All participants completed cognitive assessments pretraining and post-training, as well as functional magnetic resonance imaging (fMRI) during performance of delay discounting and risk sensitivity tasks. We hypothesized that cognitive training would enhance cognitive control processes and bias decision-making and neural activity away from choices of immediate or risky rewards.

Materials and Methods

All procedures were approved by the University of Pennsylvania Institutional Review Board. This trial was registered at clinicaltrials.gov (reg. no. NCT01252966).

Participants and eligibility

Individuals between 18 and 35 years of age who reported home computer and internet access could participate. Three hundred ninety-five participants provided informed consent and completed an in-person eligibility screen. The in-person eligibility screen included a brief IQ test to identify those with low/borderline intelligence (score of <90 on Shipley Institute of Living Scale, n = 10; Zachary, 1986), an fMRI safety form to assess fMRI contraindications (n = 22), and baseline assessments of delay discounting and risk sensitivity. Participants exhibiting extreme choice behavior were not eligible to be randomized (discount rate, k < 0.0017, n = 34; discount rate, k > 0.077, n = 7; risk sensitivity, α < 0.34, n = 36; or risk sensitivity, α > 1.32, n = 16; both k and α out of range, n = 6; technical error, n = 2). These criteria were chosen based on previous work in our laboratory and were the estimated 10th and 90th percentiles of the normal range in discount rate and the 5th and 95th percentiles of the normal range in risk sensitivity. The purpose of this exclusion was to minimize potential ceiling and floor effects on the behavioral outcomes and to ensure engagement during the scanning tasks. The scanning tasks asked the same questions of every participant and were designed to be sensitive to changes in discount rate or risk sensitivity in a wide range of participants; excluded participants fell outside of this range and would have chosen all or nearly all of one type of option on one of the scanning tasks. Other exclusion criteria were as follows: self-reported history of neurological, psychiatric, or addictive disorders (excluding nicotine), positive breath alcohol reading (>0.01), color blindness, left-handedness, and claustrophobia (n = 11). Eligible participants completed a 1 week “run-up” period to screen for noncompliance. During this week, they were instructed to complete games from the control training 5 times/week for 30 min/d. 
Those who completed fewer than four sessions were not randomized (n = 54); nor were those who did not complete the pretreatment scan visit (n = 31).

Eligible participants (n = 166) were randomized to condition in blocks of 4 (n = 84 to the cognitive training group and n = 82 to the active control group). Thirty-eight participants (22.9%) were lost to follow-up (20 participants in cognitive training group, 18 participants in active control group); these individuals were younger (mean age, 23 vs 25 years; p = 0.002) and less likely to have completed college (p = 0.02). Thus, the final analyzed sample for this fMRI-based clinical trial included 128 participants (cognitive training group, 64 participants; active control group, 64 participants).

Interventions

Participants in both conditions initiated their assigned training in the week following the baseline fMRI scan (see below). All participants were instructed to complete their assigned web-based training from home 5 times/week for 30 min/session, for a total of 50 sessions over 10 weeks. Participant compliance with training was monitored electronically, and small monetary incentives were provided for completion ($5/session). Adherence was measured as the percentage of assigned sessions that were completed; partial sessions were counted if a participant completed at least 15 min of training. Participants were classified as good adherers if they completed at least 70% of assigned sessions (approximately the top two quartiles) and poor adherers if they completed <70% of assigned sessions.

Cognitive training condition.

The cognitive training condition used Lumosity, a commercially available platform (http://www.lumosity.com/). The training program consists of internet-based games that claim to train specific cognitive domains. Many games are based on traditional psychological tasks (such as the flanker task or n-back working memory task), and all are designed to be engaging. All participants were assigned identical games (supplied by Lumosity) in a standardized order that rotated among the following six cognitive domains: working memory (∼27% of games over the 10 week training period); attention (∼13%); flexibility (∼24%); problem solving (∼15%); short-term memory (∼12%); and speed (∼9%). Individual games were ∼2–3 min long (depending on participant response speed), so that a 30 min training session consisted of 10–15 games. A core aspect of cognitive training is that it is adaptive, meaning that difficulty increased progressively across sessions as performance improved. There were a total of 23 possible exercises; examples are provided in Table 1. Standardized feedback on performance was based on the Lumosity performance index (LPI; see below), but participants were not taught specialized cognitive strategies for completing the games.
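The adaptive principle can be illustrated with a toy rule (a sketch of the general idea only; Lumosity's actual algorithm is proprietary, and the accuracy thresholds below are invented):

```python
def next_difficulty(level, accuracy, promote=0.85, demote=0.60):
    """Toy adaptive-difficulty rule: raise the level after a
    high-accuracy session, lower it after a low-accuracy session,
    and otherwise hold it steady. Thresholds are hypothetical."""
    if accuracy >= promote:
        return level + 1
    if accuracy < demote:
        return max(1, level - 1)
    return level
```

The control games, by contrast, restart from the same difficulty each session, so no state like `level` carries over between sessions.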

Active control condition.

Participants in the active control condition received an active intervention designed to account for the nonspecific effects of cognitive stimulation common to any video games or training program, such as engagement, expectancy, novelty, motivation, and contact (Motter et al., 2016). We used computer video games, which have been used as an active control for cognitive training programs in several previous studies (Kundu et al., 2013; Nouchi et al., 2013). Video games were developed by the Drexel University RePlay Lab (http://replay.drexel.edu/index.html) and included a total of 40 possible games (http://drexelgames.com/); examples are provided in Table 2. Participants were not prompted to complete particular games within each session and could spend as much time on each game as they chose as long as they spent 30 min playing in total. These games were not specifically designed to tax executive functions and therefore were not expected to engage these abilities more than typical computerized games but were designed to be entertaining and engaging. Although these games can become more challenging as one progresses through the game within a session, user performance is not tracked over sessions and game difficulty is not adapted during each session to current user abilities, as in the cognitive training condition (i.e., users start from the beginning of the game each session). Both adaptive testing and the targeting of specific processes are believed to be key components of the efficacy of cognitive training (Morrison and Chein, 2011; Shipstead et al., 2012). At the same time, participants in both groups were given the same information regarding the study purpose (e.g., “we are investigating the effects of certain types of computer games on brain activity and decision-making behavior”), controlling for expectancy effects. The variety of games available in both conditions allowed each to present a novel experience. 
To control for motivation and contact, participants in both conditions received the same completion incentives and the same weekly phone calls to review study compliance and were blinded to their specific training condition.

Delay discounting.

Participants chose between a smaller immediate reward ($20 today) and a larger reward available after a longer delay (e.g., $40 in a month). The immediate reward was fixed, and the magnitude and delay of the larger, later reward varied from trial to trial. Each trial began with the presentation of the later option (amount and delay); the standard immediate option was not displayed. When subjects made their choice, a marker indicating their choice (checkmark if the later option was chosen, “X” if the immediate option was chosen) appeared for 1 s. Subjects had 4 s to make their choice. Subjects made 120 such choices in each session, over four scans of 5 min 18 s each.

The primary behavioral outcome was discount rate (k), which was estimated by fitting a logistic regression to choice data. The subjective value (SV) of the choice options was assumed to follow hyperbolic discounting, as follows:

SV = A/(1 + kD)

where A is the amount of the option, D is the delay until the receipt of the reward (for immediate choice, D = 0), and k is a discount rate parameter that varies across subjects. Higher values of k indicate greater discounting and less tolerance of delay. The proportion of smaller immediate choices was also calculated as a secondary metric of discounting, which does not make assumptions about the parametric form of discounting. A two-parameter quasi-hyperbolic model (Laibson, 1997) was also fit to these data, but as these fits yielded similar conclusions (no change in either condition in either β or δ parameters of the quasi-hyperbolic model), they are not presented in detail here.
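The hyperbolic model and the logistic choice rule can be sketched as follows (a minimal illustration with synthetic data, not the authors' fitting code; the fixed choice-noise parameter and grid-search estimator are our simplifications):

```python
import numpy as np

def subjective_value(amount, delay, k):
    """Hyperbolic discounting: SV = A / (1 + k * D)."""
    return amount / (1.0 + k * delay)

def fit_discount_rate(amounts, delays, chose_later, immediate=20.0, beta=0.5):
    """Grid-search maximum likelihood for the discount rate k, assuming a
    logistic choice rule on the SV difference between the later option and
    the fixed $20-today option. beta is a fixed noise parameter."""
    grid = np.exp(np.linspace(np.log(1e-4), np.log(1.0), 400))
    best_k, best_ll = grid[0], -np.inf
    for k in grid:
        sv = subjective_value(amounts, delays, k)
        p = 1.0 / (1.0 + np.exp(-beta * (sv - immediate)))
        p = np.clip(p, 1e-9, 1 - 1e-9)
        ll = np.sum(np.where(chose_later, np.log(p), np.log(1 - p)))
        if ll > best_ll:
            best_k, best_ll = k, ll
    return best_k

# Hypothetical chooser with true k = 0.02, amounts in $25-$80,
# delays of 1-180 days
rng = np.random.default_rng(0)
amounts = rng.uniform(25, 80, 120)
delays = rng.integers(1, 180, 120).astype(float)
chose_later = subjective_value(amounts, delays, 0.02) > 20.0
k_hat = fit_discount_rate(amounts, delays, chose_later)
```

Higher recovered k means steeper discounting; the immediate option has D = 0, so its SV is simply its amount.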

Risk sensitivity.

Participants chose between a smaller certain reward (100% chance of $20) and a larger riskier reward (e.g., 50% chance of $40). The certain reward was fixed, and the magnitude and probability of the larger, uncertain reward varied from trial to trial. Each trial began with the presentation of the risky option (amount and probability); the standard certain option was not displayed. When subjects made their choice, a marker indicating that choice (checkmark if the risky option was chosen, “X” if the certain option was chosen) appeared for 1 s. Subjects had 4 s to make their choice. Subjects made 120 such choices in each session, over four scans of 5 min 18 s each.

The primary behavioral outcome was the subject's degree of risk sensitivity (α), estimated by fitting a logistic regression to choice data. The SV of the choice options was assumed to follow a power utility function, as follows:

SV = p × A^α

where p is the probability of winning amount A and α is a risk sensitivity parameter that varies across subjects. For the risky option, there is always a 1 − p chance of winning nothing. Higher α indicates a larger risk tolerance and lesser degree of risk aversion. The proportion of smaller certain choices was also calculated as a secondary metric of risk sensitivity, which does not make assumptions about the parametric form of risk aversion.
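The power utility rule can be sketched in the same way (an illustration with made-up values, not the study's fitting code):

```python
def risky_sv(p, amount, alpha):
    """Power utility: SV = p * A**alpha. The certain $20 option is the
    special case p = 1, i.e., SV = 20**alpha."""
    return p * amount ** alpha

# A risk-neutral chooser (alpha = 1) values a 50% chance of $40 at
# exactly the certain $20; alpha < 1 (risk averse) tips the choice to
# the sure thing, alpha > 1 (risk tolerant) to the gamble.
prefers_gamble = {a: risky_sv(0.5, 40.0, a) > risky_sv(1.0, 20.0, a)
                  for a in (0.6, 1.0, 1.4)}
```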

Subject-level analyses were performed using the FSL tool FEAT (FMRIB fMRI Expert Analysis Tool). Task regressors were time locked to trial onset (event duration, 0.1 s) and convolved with a canonical gamma hemodynamic response function. In one set of GLMs, parametric regressors modeling the subjective value of the variable option (larger delayed or risky option) were generated using the discount rate and risk sensitivity parameters estimated from each subject, and orthogonalized to the task regressor (Kable and Glimcher, 2007; Levy et al., 2010). In a second set of GLMs, categorical regressors modeling whether the variable option (larger delayed or risky option) was chosen were included instead of the parametric value regressors. All GLMs included a regressor that designated missed trials; these trials were excluded from the regressors of interest.

Due to limitations in the single-step variance partitioning of FLAME (FMRIB Local Analysis of Mixed Effects), to approximate a two-group repeated-measures ANOVA, contrasts for the overall mean and the difference between pretreatment and post-treatment sessions were performed at the subject level and then carried up to the group level to analyze potential group, time (scan session), and interaction effects. One-sample t tests were then conducted to test for main effects and effects of time, and two-sample t tests were conducted to test for group and group-by-time interaction effects. Whole-brain analyses were thresholded at p < 0.001 and then corrected at the cluster level for multiple comparisons (p < 0.05) through permutation testing using cluster mass as implemented in the FSL tool randomise (Winkler et al., 2014). Higher-power region of interest (ROI) analyses were also conducted in the dorsolateral prefrontal cortex (dlPFC), ventromedial prefrontal cortex (vmPFC), and ventral striatum (VS). The dlPFC ROI (123 voxels at 2 × 2 × 2 mm; 6.2 mm spherical kernel, centered on MNI coordinates −43, 10, and 29) was based on a meta-analysis identifying overlap between working memory and delay discounting activations (Wesley and Bickel, 2014). The vmPFC and VS ROIs were based on a meta-analysis of value-related neural signals (Bartra et al., 2013). ROI analyses were corrected for multiple comparisons (3 ROIs × 2 tasks × 2 regressors = 12 tests) using Bonferroni's method.

Fourteen participants were excluded from the neuroimaging analyses due to excessive in-scanner motion (>5% of image-to-image relative mean displacements >0.5 mm, n = 4), excessive missed trials (>10% nonresponses in a single run for more than two runs within a session, n = 6), incomplete or corrupted data (>25% unusable runs within a single session, n = 3), or expressed knowledge of experimental conditions (i.e., active control versus cognitive training, n = 1). Thus, 114 subjects were included in the final analyses of the task fMRI data (mean age, 25.1 years; 51 women overall; cognitive training group, 56 subjects).

Visual/spatial n-back (working memory).

During the n-back, participants are instructed to remember the location of a stimulus, a gray circle that is ∼5 cm in diameter, as it appears randomly in eight possible locations around the perimeter of a computer screen. The stimulus appears for 200 ms, followed by an interstimulus interval (ISI) of 2800 ms. A crosshair remains visible during the stimulus presentation to cue participants to look at the center of the screen so that all stimuli appearing around the perimeter of the screen can be seen clearly. The n-back task includes four conditions of varying difficulty levels, as follows: the 0-back, 1-back, 2-back, and 3-back. Participants respond only to targets (25% of stimuli) by pressing the SPACEBAR (Green et al., 2005; Owen et al., 2005; Ehlis et al., 2008). The primary outcomes are number correct and correct response time.

Penn continuous performance test (visual attention and vigilance).

This task is based on the Penn continuous performance test (CPT; Kurtz et al., 2001). In this task, a series of red vertical and horizontal lines (seven segment displays) flash in a digital numeric frame (resembling a digital clock). The participant must press the spacebar whenever these lines form complete numbers or complete letters. Stimuli are presented for 300 ms, followed by a fixed 700 ms ISI. The task is divided into two parts, each lasting 3 min, as follows: in the first part, the participant is requested to respond to numbers; and in the second part, the response is to letters. The primary outcomes are number correct and correct response time.

Stop signal task (response inhibition).

In this task, participants are instructed to press labeled keyboard keys as quickly and as accurately as possible to indicate the direction an arrow faces. Following an initial 32-trial practice, audio stop signals are presented on 25% of trials during a second 32-trial practice block and three task blocks of 64 trials each. The initial stop delay in each block is 250 ms and adjusts in 50 ms increments depending on whether the participant is able to successfully inhibit a response (Logan, 1994; Logan et al., 1997). The adjusting stop delay allows the determination of the delay at which inhibition occurs on ∼50% of trials. All trials consist of a 500 ms warning stimulus followed by a 1000 ms go signal (left- and right-facing arrows) and a 1000 ms blank screen intertrial interval. The primary outcome is the stop signal response time, calculated as the difference between the mean response time on successful go trials and the mean stop delay on successful inhibition trials.
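The adjusting stop delay and the outcome computation can be sketched as follows (a simplified illustration with invented numbers, not the task software):

```python
import numpy as np

def track_stop_delay(inhibition_outcomes, start=250, step=50):
    """One-up/one-down staircase: after a successful inhibition the
    stop delay lengthens (stopping gets harder); after a failed one it
    shortens. This converges toward ~50% successful inhibition."""
    delay, history = start, []
    for inhibited in inhibition_outcomes:
        history.append(delay)
        delay = delay + step if inhibited else max(0, delay - step)
    return history

def stop_signal_rt(go_rts, delays_on_successful_inhibitions):
    """Stop signal response time estimate: mean go RT minus mean stop
    delay on successfully inhibited trials."""
    return np.mean(go_rts) - np.mean(delays_on_successful_inhibitions)
```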

Stroop test (resistance to interference).

The Stroop test is a measure of the ability to screen out distracting stimuli (Stroop, 1935). In this task, participants view a series of words on a computer monitor and, using the keyboard, are asked to press the key associated with the color of the word rather than the word itself. Stimuli are presented and remain onscreen until the participant responds or 3.5 s have elapsed (whichever comes first), followed by a fixed 100 ms ISI. Participants are instructed to respond as quickly and accurately as possible. Congruent trials are trials in which the word and color match (e.g., the word “green” appears in the color green). Incongruent trials are trials in which the words are printed in colors that do not match the colors of the words (e.g., the word “red” might appear in green). The primary outcome is the Stroop effect, an interference score calculated as the response time on incongruent trials minus the response time on congruent trials. The Stroop effect measures the ability to suppress a habitual response in favor of an unusual one, taking into account the overall speed of naming.

Color shape task (flexibility).

In each trial of this task (Miyake et al., 2004), a cue letter (C or S) appears above a colored rectangle with a shape in it (outline of a circle or triangle). Participants are instructed to indicate whether the color is red or green when the cue is C, and whether the shape is a circle or a triangle when the cue is S. The cue appears 150 ms before the stimulus, and both the cue and the stimulus remain on the screen until the participant responds. The primary outcome is the task switch cost, which is calculated as the difference in response time on switch trials (the cue differs from the previous trial) versus stay trials (the cue is the same as the previous trial). Smaller switch costs indicate greater cognitive flexibility.

Lumosity performance index.

To track average performance on Lumosity tasks during training, the platform generated an LPI, a weighted average of performance across tasks based on percentiles for a given age group, with an exponential smoothing procedure applied to account for day-to-day fluctuations. The LPI was used to assess improvements on trained exercises with practice in the cognitive training condition.
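Exponential smoothing of this kind can be sketched as follows (the smoothing weight is hypothetical; the platform's actual LPI computation is proprietary):

```python
def smooth_index(daily_scores, weight=0.3):
    """Exponential smoothing sketch: each day's index blends the new
    score with the running estimate, damping day-to-day fluctuation.
    weight (between 0 and 1) is a hypothetical smoothing parameter;
    smaller weights damp fluctuations more heavily."""
    estimate = daily_scores[0]
    smoothed = [estimate]
    for score in daily_scores[1:]:
        estimate = weight * score + (1 - weight) * estimate
        smoothed.append(estimate)
    return smoothed
```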

Follow-up study of test–retest performance on cognitive assessments

After observing improvements on the cognitive assessments for both the active control and cognitive training groups, we performed a follow-up study to examine the effects of repeated testing with these assessments in the absence of any intervention. We recruited 35 participants between 18 and 35 years of age, excluding colorblind individuals and current users of Lumosity on-line training. These participants completed the cognitive testing battery on three occasions, separated by 1 week intervals and with no contact or intervention in the interim. Although this is a shorter delay than the pretraining, mid-training, and post-training assessments in the primary study, our primary concern was the extent of the potential practice effects, and healthy adults show similar practice gains throughout the first 3 months of serial testing (Bartels et al., 2010). Participants who completed fewer than three sessions (n = 5) or showed performance of >3 SDs from the mean on one of the cognitive tasks at the first testing session (n = 1) were excluded from the analysis. The analyzed sample (n = 29) was 69% female and had an average age of 23 years. As the no-contact control group was recruited separately, we were unable to apply methodological procedures (e.g., minimization techniques; Pocock and Simon, 1975; Scott et al., 2002) to reduce the likelihood of baseline differences. Therefore, to better compare the active control and cognitive training groups to this no-contact group, we selected a subset of participants matched on baseline cognitive composite score (see below; n = 25 for all groups). Each participant in the no-contact group was matched with their nearest unmatched neighbors among both the active control and cognitive training participants in ranked baseline performance, excluding match distances beyond a caliper of 0.1 (Stuart, 2010).
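The matching procedure can be sketched as greedy nearest-neighbor matching with a caliper (a simplified illustration of the general approach with invented scores; the study matched on ranked baseline composite performance):

```python
def caliper_match(reference_scores, candidate_scores, caliper=0.1):
    """Greedy nearest-neighbor matching sketch: each reference score is
    paired with its nearest still-unmatched candidate, and pairs whose
    distance exceeds the caliper are discarded."""
    available = set(range(len(candidate_scores)))
    pairs = []
    for i, score in enumerate(reference_scores):
        if not available:
            break
        j = min(available, key=lambda idx: abs(candidate_scores[idx] - score))
        if abs(candidate_scores[j] - score) <= caliper:
            pairs.append((i, j))
            available.remove(j)
    return pairs
```

Removing matched candidates from the pool ensures one-to-one pairing, and the caliper discards reference participants with no sufficiently close counterpart.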

Experimental design and statistical analysis

Multiple regression models were estimated for the choice behavior and cognitive outcomes using the Stata xtreg command (StataCorp) with maximum likelihood techniques. Models included terms for main and interacting effects of treatment (active control vs cognitive training) and time point (pretreatment vs post-treatment), including age, sex, and education as covariates. Delay discounting rates (k) were log transformed to normalize the distribution. Cognitive models also included the mid-treatment time point in addition to pretreatment and post-treatment; these models were examined for the full sample and separately within the sample of good adherers (≥70% of sessions completed) to determine whether engagement with the programs affected outcomes. Outliers were excluded based on pretreatment performance of >3 SDs from the mean. To form a composite cognitive performance score, z-scores were calculated separately for each of the five tasks across time points and treatment conditions (tasks for which lower values indicate improved performance were reverse scored) and then averaged together within subjects for each time point. For the cognitive training group only, changes in performance on trained tasks (LPI) over time were examined using multiple regression with terms for main and interacting effects of adherence (percentage of assigned sessions completed; continuous measure) and time (day of training period), controlling for age, sex, and education. Pairwise correlation was used to identify baseline correlations between decision-making outcomes and cognitive performance.
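The composite score computation can be sketched as follows (variable names are ours; reverse-scored tasks are those, such as response-time measures, where lower raw values indicate better performance):

```python
import numpy as np

def composite_scores(task_scores, reverse=()):
    """z-score each task across all observations, flip the sign for
    reverse-scored tasks, then average the z-scores within subject.

    task_scores: dict mapping task name -> array of raw scores,
                 one entry per subject
    reverse: task names whose z-scores should be sign-flipped
    """
    zs = []
    for name, values in task_scores.items():
        v = np.asarray(values, dtype=float)
        z = (v - v.mean()) / v.std()
        zs.append(-z if name in reverse else z)
    return np.mean(zs, axis=0)

# Hypothetical raw scores for two subjects on two tasks
scores = {"n_back_correct": [10, 20], "stroop_rt": [300, 100]}
composite = composite_scores(scores, reverse=("stroop_rt",))
```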

Results

Descriptive data

The cognitive training and active control groups did not differ on any baseline variables (p values >0.05; Table 3). Overall, 44% of participants were female, 59% graduated college, and the average age was 25 years. Adherence (percentage of sessions completed) was high across both conditions, as follows: 80% (SD, 19) in the active control group and 74% (SD, 20) in the cognitive training group (F(1,126) = 3.26; p = 0.07). There were no differences between the cognitive training and active control groups in pretreatment delay discounting (cognitive training group: mean logk, −1.82; range, −3.07 to −0.92; active control group: mean logk, −1.79; range, −3.07 to −1.06; F(1,126) = 0.13; p = 0.72) or risk sensitivity (cognitive training group: mean α = 0.68; range, 0.21–1.41; active control group: mean α = 0.65; range, 0.28–1.49; F(1,126) = 0.49; p = 0.49).

Decision-making task outcomes. Performance on the delay discounting and risk sensitivity tasks in each group at pretreatment and post-treatment scan sessions. In the multiple regression models, there were no treatment by time interaction effects on decision-making task performance (p values >0.5).

To examine whether participants who were higher discounters or more risk seeking might experience greater benefits from training, we performed an exploratory analysis using multiple regression to examine associations between baseline decision-making and change in decision-making, controlling for age, sex, and education. Although baseline decision-making was significantly associated with change in decision-making (all p values <0.01), these effects did not differ by treatment group (all p values >0.05). To examine the form of this association, we divided participants into tertiles based on their baseline decision-making. The association was clearly driven by regression to the mean, with the lowest discounters exhibiting a trend toward increased discount rates (change in logk, 0.11 ± 0.05; p = 0.054) and the highest discounters exhibiting a trend toward decreased discount rates (change in logk, −0.09 ± 0.05; p = 0.07), and with the most risk-averse individuals exhibiting a significant shift toward greater risk tolerance (change in α = 0.06 ± 0.03; p = 0.02) and the most risk-tolerant individuals exhibiting a nonsignificant shift toward greater risk aversion (change in α = −0.04 ± 0.04; p = 0.31).

Neural activity (primary outcomes)

There were no effects of condition (cognitive training vs active control) on changes in neural activity during choices (Fig. 2). In a whole-brain analysis, there was robust and widespread choice-related activity (choice vs baseline contrast) that was similar in both tasks and centered in frontal-parietal, cingular-opercular, and sensorimotor regions. There was also robust and widespread value-related activity (parametric subjective value contrast) that was similar in both tasks and centered in previously identified valuation regions (vmPFC, VS, and posterior cingulate) as well as frontal-parietal and cingular-opercular regions activated by the choice task. In the risk sensitivity task, there were increases in choice-related activity from pretreatment to post-treatment in both groups in medial prefrontal, posterior cingulate, and lateral temporal cortex, all regions associated with the “default-mode network” (Raichle et al., 2001). Critically, however, these changes over time did not differ as a function of treatment condition and, therefore, could not be attributed to an effect of cognitive training.

Whole-brain analyses of neural activity. Mean activation (choice trials vs baseline; A, B) and subjective value effects (C, D) across the whole brain, for both the delay discounting (A, C) and risk sensitivity (B, D) tasks, as well as changes in mean activation from pretreatment to post-treatment in the risk sensitivity task (E), independent of treatment condition. Subjective value effects were determined using parametric regressors based on discount rate and risk sensitivity parameters estimated from each subject and orthogonalized to the task regressor. There were no effects of treatment condition on changes in neural activity over time in either task. All brain images are height thresholded at p < 0.001 to form clusters and are corrected for multiple comparisons using permutation testing on cluster mass at p < 0.05. The 3-D brain images were generated using the surface-rendering tool Surf Ice, developed at the University of South Carolina. Source code for the program is available at www.nitrc.org/projects/surfice/.

To determine whether our whole-brain analysis missed any subtle neural effects in the brain regions we had predicted, we examined choice-related and value-related activity in dlPFC, vmPFC, and VS regions identified in previous meta-analyses (Bartra et al., 2013; Wesley and Bickel, 2014). We had hypothesized that cognitive training would enhance activity in dlPFC in both tasks, leading to enhanced vmPFC/VS activity for delayed rewards and reduced vmPFC/VS activity for risky rewards. However, there were no main effects of testing session or treatment condition, or effects of treatment condition on changes in neural activity in these more sensitive ROI analyses (Fig. 3).

When we examined categorical differences in activity depending on whether the variable (larger delayed or risky) option was selected or not, we again observed robust and widespread increases in frontal-parietal and cingular-opercular regions when the variable option was selected, but there were no changes in these effects from pretreatment to post-treatment, and no condition by time interactions, in either the whole-brain or ROI analyses.

Practice effects on cognitive measures. A, Composite cognitive performance scores (averaged z-scores across all five cognitive tests) by treatment group and testing session. There were significant main effects of treatment (participants in the no-contact control group scored lower than the other two groups at all sessions; p = 0.02) and testing session (participants in all conditions improved over time; p < 0.0001), but there was no treatment by session interaction effect (p = 0.85). B, Composite scores for subsets of participants matched on baseline performance. There were significant effects of testing session (p < 0.0001), but there were no main effects of treatment (p = 0.64) or a treatment by session interaction (p = 0.86).

Practice effects on cognitive measures (secondary outcomes)

Participants in the follow-up study were slightly younger (mean age, 23 years vs 25 years in primary study; p = 0.01) and more likely to be female (69% vs 44% in primary study; p = 0.01). Age and sex were included as covariates in the analysis; however, neither was associated with task performance. Composite cognitive scores increased across the three sessions to an extent similar to that observed in the active control and cognitive training groups (Fig. 4). In an analysis comparing this group with the active control and cognitive training groups, there were significant effects of testing session (β = 0.19; 95% CI, 0.15–0.23; Wald χ2(1) = 81.47; p < 0.0001) and treatment condition (β = 0.13; 95% CI, 0.02–0.24; Wald χ2(1) = 5.37; p = 0.02), but there was no treatment by time interaction effect (Wald χ2(4) = 1.38; p = 0.85; Fig. 4, left). Given the significant effect of treatment condition, we further examined subsets of each group matched on baseline cognitive composite. In these matched subsets, there was a significant effect of testing session (Wald χ2(1) = 56.43; p < 0.0001), but there was no effect of treatment condition (Wald χ2(1) = 0.22; p = 0.64) and no treatment by time interaction (Wald χ2(4) = 1.32; p = 0.86; Fig. 4, right).

Performance on trained tasks in the cognitive training group

Performance on the training tasks in the cognitive training condition was measured with the Lumosity Performance Index (LPI). Over the training period, LPI increased in the cognitive training group by an average of 390.8 points (SD, 222.2). This increase was correlated with adherence, such that participants who completed more sessions continued to improve throughout the training period, whereas participants who completed fewer sessions plateaued over time (Fig. 5; adherence by time interaction effect: β = 0.02; Wald χ2(1) = 19.18; p < 0.0001). A similar analysis could not be conducted in the active control condition.

Figure 5. Performance over time in the cognitive training group. Performance on trained tasks over time in the cognitive training group, grouped by adherence to the training schedule. In the multiple regression model, there was a significant adherence (continuous measure) by time interaction effect (β = 0.02, p < 0.001). For simplicity, adherence is graphed by tertile based on the percentage of assigned sessions that were completed (low adherence, ≤74% completed; moderate adherence, 75–88% completed; high adherence, 89–100% completed).

Discussion

Motivated by findings that adaptive cognitive training alters activity in brain regions associated with cognitive control (Olesen et al., 2004; Dahlin et al., 2008; Takeuchi et al., 2011; Jolles et al., 2013) and that engagement of these regions can bias choices away from immediate and risky rewards (Knoch et al., 2006; DelParigi et al., 2007; Christopoulos et al., 2009; Gianotti et al., 2009; Hare et al., 2009; Kober et al., 2010; Hare et al., 2011), we hypothesized that cognitive training would alter neural activity during decision-making, reduce delay discounting, and increase risk sensitivity. We conducted a randomized controlled trial of commercial adaptive cognitive training versus control training involving nonadaptive, nontargeted computer games in healthy young adults. Contrary to our hypotheses, we found no effects of cognitive training on brain activity during decision-making, on delay discounting, or on risk sensitivity. We did observe a baseline association between working memory and delay discounting; if the effects of cognitive training transferred beyond the trained tasks, improvement on measures of working memory should therefore have produced changes in delay discounting. Although participants in the commercial training condition improved with practice on the specific tasks performed during training, both conditions showed similar improvement on standardized cognitive measures over time, and similar levels of improvement were observed in a follow-up study of practice effects on the cognitive measures in the absence of any intervention. These results do not support the hypothesis that cognitive training produces transfer effects beyond the trained tasks. Commercial adaptive cognitive training in young adults appears to have no effects beyond those of standard video games on neural activity, choice behavior, or cognition.

Does cognitive training alter neural activity and decision-making?

We found no effects of cognitive training on our primary behavioral measures, delay discounting and risk sensitivity. We also found no effects of cognitive training on neural activity during decision-making. This rules out the possibility that cognitive training produced neural changes that were simply insufficient to generate significant behavioral effects. The conclusion that cognitive training does not affect decision-making or brain activity does not, for the most part, depend on comparison to a control group, as there were largely no changes in these measures after cognitive training. The only changes we observed were increases in choice-related activity in default-mode regions from pretreatment to post-treatment in the risk sensitivity task, but these effects were not specific to cognitive training. These changes could represent effects of cognitive stimulation that were common across both conditions, but they might also merely reflect repeated exposure to the task.
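As a concrete illustration of how a delay discounting measure is typically derived, a common approach (not necessarily the exact estimation procedure used in this study) is Mazur's hyperbolic model, V = A/(1 + kD), with the discount rate k chosen to best reproduce a participant's choices between smaller-sooner and larger-later rewards. The choice set, chooser, and parameter grid below are hypothetical:

```python
def hyperbolic_value(amount, delay, k):
    """Mazur's hyperbolic model: subjective value of a delayed reward."""
    return amount / (1.0 + k * delay)

def predicted_choices(trials, k):
    """Deterministic chooser: 1 = larger-later, 0 = smaller-sooner."""
    return [1 if hyperbolic_value(later, delay, k) > now else 0
            for now, later, delay in trials]

def fit_k(trials, choices, grid):
    """Grid-search the discount rate whose predictions best match the data."""
    return max(grid, key=lambda k: sum(
        p == c for p, c in zip(predicted_choices(trials, k), choices)))

# Hypothetical choice set: $now vs $50 at various delays (in days).
trials = [(now, 50, d) for now in (20, 25, 30, 35, 40)
          for d in (7, 30, 90, 180)]
observed = predicted_choices(trials, 0.02)  # simulate a k = 0.02 chooser
k_hat = fit_k(trials, observed,
              [0.001, 0.002, 0.005, 0.01, 0.02, 0.05, 0.1, 0.2])
print(k_hat)  # recovers 0.02
```

Higher fitted k means steeper discounting (stronger preference for immediate rewards); in practice a stochastic (e.g., softmax) choice rule and maximum-likelihood estimation would replace the deterministic chooser used here for simplicity.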

Although statistical null results should always be interpreted with caution, our study is relatively well powered to detect neural changes across conditions compared with other brain-imaging studies (Buschkuehl et al., 2012; Penadés et al., 2013; Subramaniam et al., 2014; Conklin et al., 2015). Our sample size of 128 individuals (64 individuals/group) included in the analysis of decision-making outcomes provides 80% power to detect a moderate effect (Cohen's d, ∼0.44) with α set to 0.05 (Faul et al., 2007). The slightly smaller sample of 114 individuals with good imaging data provides 80% power to detect an effect size of d = 0.47 for the analysis of neural activity. Although it is possible that cognitive training provides a benefit too small to detect in this study, the data reveal no suggestion of a difference between conditions (Fig. 1).

Our findings are of interest as they differ from a previous study reporting beneficial effects of cognitive training on delay discounting (Bickel et al., 2011). In this prior study, 27 stimulant addicts undergoing treatment for substance abuse were assigned to either working memory training or control training. In the control group, participants viewed the same working memory programs but were provided with the answers so that they did not need to engage working memory systems. The investigators observed a significant decrease in delay discounting among participants in the working memory training group, compared with a nonsignificant increase in delay discounting in the control group. This contrasts with our finding of no change in discounting in either the cognitive training or active control group. The difference in outcomes of the two studies could be due to differences in methodology. First, the details of both the training and control conditions differed across the two studies. As discussed below, there may be differences between working memory-specific and broad-based cognitive training programs. Second, the sample size of the prior study (n = 27) was smaller than that of ours (n = 128). Finally, Bickel et al. (2011) examined the effects of cognitive training in stimulant addicts undergoing treatment, compared with the healthy young adults in our study. It is possible that cognitive training is more beneficial in substance abuse, especially for addicts who are acutely trying to maintain abstinence (Loughead et al., 2010, 2015; Patterson et al., 2010; Falcone et al., 2014).

Does cognitive training affect cognitive abilities?

Participants in the cognitive training group did improve on the tasks used during training. However, participants in both the active control and cognitive training groups demonstrated similar degrees of improvement on the cognitive assessment battery, which contained measures that were not directly trained but were within the general domain of executive function targeted by the training. The lack of difference between the cognitive training and active control groups is itself of great relevance, as most cognitive training regimens, like Lumosity but unlike our active control training, use tasks inspired by classic measures of executive function, delivered in an adaptive manner. Additionally, participants in both the active control and cognitive training groups demonstrated no greater improvement than participants in a follow-up study who were simply retested without any intervention, suggesting that the observed improvements are due to practice with the cognitive assessments rather than to a beneficial effect of computer games. Thus, our findings fit with a growing number of studies demonstrating effects of cognitive training on measures closely related to the training tasks (near transfer) but no effects on measures that are less closely related (far transfer; Thompson et al., 2013; Cortese et al., 2015; Lawlor-Savage and Goghari, 2016; Melby-Lervåg et al., 2016).

An important consideration in evaluating the effects of cognitive training is the control group. Unlike many previous efforts (Lampit et al., 2014; Noack et al., 2014; Bogg and Lasecki, 2015), we included an active control condition with a similar level of engagement, expectancy, novelty, motivation, and interpersonal interaction (Motter et al., 2016). Any of these factors could account for reported effects of cognitive training relative to passive (no-contact) control conditions. In contrast, an active control condition isolates differences of practical or theoretical importance. It is of practical importance whether commercial training programs outperform conventional web-based video games, and it is of theoretical importance whether adaptive training provides any benefit over nonadaptive training.

Limitations

An important caveat is that the efficacy of adaptive cognitive training may vary across populations. The participants in this study were young, healthy individuals without pre-existing cognitive impairments; it is possible that these participants were already functioning at high levels and therefore would not derive much benefit from cognitive training. Participants performed very well on the cognitive tasks at baseline, scoring on average ∼90% correct on the n-back and ∼95% correct on the CPT. However, there was sufficient room for improvement, and we did detect significant improvements over time in all groups. Other studies have found beneficial effects of working memory training on measures of self-control other than delay discounting, including reduced alcohol intake among problem drinkers (Houben et al., 2011) and reduced food intake in overweight individuals (Houben et al., 2016). Therefore, our results leave open the possibility that cognitive training could have stronger effects in children, older adults, or individuals with certain clinical conditions (Rueda et al., 2005; Willis et al., 2006; Vinogradov et al., 2012; Heinzel et al., 2014).

It is also possible that different results would be found if different cognitive domains were targeted. Studies that have focused on training specific cognitive domains have most consistently found transfer effects when training working memory (Au et al., 2015). The Lumosity cognitive training platform targets multiple cognitive domains involved in executive function, an approach used by several other broad-based cognitive training programs (Owen et al., 2010; Schmiedek et al., 2010; McDougall and House, 2012; Nouchi et al., 2013). Of the training exercises assigned, ∼27% specifically targeted working memory. However, we cannot rule out that a different balance of exercises (e.g., a greater "dose" of working memory exercises) might provide different benefits. On the other hand, several studies have demonstrated links between self-control and the other domains targeted by the Lumosity program (e.g., attention and cognitive flexibility; Hofmann et al., 2012; Fleming et al., 2016; Kleiman et al., 2016). The training interval, even considering working memory exercises alone, was also longer than in many previous studies (Ball et al., 2002; Nouchi et al., 2013; Oei and Patterson, 2013; Noack et al., 2014), making it less likely that a null effect was due to an insufficient dose of training.

(2001) Comparison of the continuous performance test with and without working memory demands in healthy controls and patients with schizophrenia. Schizophr Res 48:307–316. doi:10.1016/S0920-9964(00)00060-8; pmid:11295383

(2016) Working memory training does not improve performance on measures of intelligence or other measures of "far transfer": evidence from a meta-analytic review. Perspect Psychol Sci 11:512–534. doi:10.1177/1745691616635612; pmid:27474138

(2016) Academic outcomes 2 years after working memory training for children with low working memory: a randomized clinical trial. JAMA Pediatr 170:e154568. doi:10.1001/jamapediatrics.2015.4568; pmid:26954779