Episodic memory

We talk about memory for ‘events’, but how does the brain decide what an event is? How does it decide what is part of an event and what isn’t? A new study suggests that our brain uses categories it creates based on temporal relationships between people, objects, and actions — i.e., items that tend to—or tend not to—pop up near one another at specific times.


A small study involving patients with TBI has found that the best learning strategies are those that call on the self-schema rather than episodic memory, and that the best of these is self-imagination.

Some time ago, I reported on a study showing that older adults could improve their memory for a future task (remembering to regularly test their blood sugar) by picturing themselves going through the process. Imagination has been shown to be a useful strategy in improving memory (and also motor skills). A new study extends and confirms previous findings, by testing free recall and comparing self-imagination to more traditional strategies.

The study involved 15 patients with acquired brain injury who had impaired memory and 15 healthy controls. Participants memorized five lists of 24 adjectives that described personality traits, using a different strategy for each list. The five strategies were:

think of a word that rhymes with the trait (baseline),

think of a definition for the trait (semantic elaboration),

think about how the trait describes you (semantic self-referential processing),

think of a time when you acted out the trait (episodic self-referential processing), or

imagine acting out the trait (self-imagining).

For both groups, self-imagination produced the highest rates of free recall of the list (an average of 9.3 for the memory-impaired, compared to 3.2 using the baseline strategy; 8.1 vs 3.2 for the controls — note that the controls were given all 24 items in one list, while the memory-impaired were given 4 lists of 6 items).
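In terms of relative benefit, self-imagination roughly tripled baseline recall in both groups. A quick check of that arithmetic, using the means reported above:

```python
# Free-recall means reported in the study (items recalled per 24-adjective set)
recall = {
    "memory_impaired": {"baseline": 3.2, "self_imagination": 9.3},
    "controls":        {"baseline": 3.2, "self_imagination": 8.1},
}

for group, scores in recall.items():
    ratio = scores["self_imagination"] / scores["baseline"]
    print(f"{group}: self-imagination recalled {ratio:.1f}x the baseline items")
```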

Additionally, those with impaired memory did better using semantic self-referential processing than episodic self-referential processing (7.3 vs 5.7). In contrast, the controls did much the same in both conditions. This adds to the evidence that patients with brain injury often have a particular problem with episodic memory (knowledge about specific events). Episodic memory is also particularly affected in Alzheimer’s, as well as in normal aging and depression.

It’s also worth noting that all the strategies that involved the self were more effective than the two strategies that didn’t, for both groups (also, semantic elaboration was better than the baseline strategy).

The researchers suggest self-imagination (and semantic self-referential processing) might be of particular benefit for memory-impaired patients, by encouraging them to use information they can more easily access (information about their own personality traits, identity roles, and lifetime periods — what is termed the self-schema), and that future research should explore ways in which self-imagination could be used to support everyday memory tasks, such as learning new skills and remembering recent events.


Findings supporting dopamine’s role in long-term episodic memory point to a decline in dopamine levels as part of the reason for cognitive decline in old age, and perhaps in Alzheimer’s.

The neurotransmitter dopamine is found throughout the brain and has been implicated in a number of cognitive processes, including memory. It is well known, of course, that Parkinson's disease is characterized by low levels of dopamine, and is treated by raising dopamine levels.

A new study of older adults has now demonstrated the effect of dopamine on episodic memory. In the study, participants (aged 65-75) were shown black and white photos of indoor scenes and landscapes. The subsequent recognition test presented them with these photos mixed in with new ones, and required them to note which photos they had seen before. Half of the participants were first given Levodopa (‘L-dopa’), and half a placebo.

Recognition tests were given two and six hours after participants viewed the photos. There was no difference between the groups at the two-hour test, but at the six-hour test, those given L-dopa recognized up to 20% more photos than controls.

The failure to find a difference at the two-hour test was expected, if dopamine’s role is to help strengthen the memory code for long-term storage, which occurs after 4-6 hours.

Individual differences indicated that the ratio between the amount of Levodopa taken and body weight is key for an optimally effective dose.
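A dose normalized by body weight is simply milligrams per kilogram. A minimal sketch of that calculation — the doses and weights here are hypothetical, not the study's data:

```python
def dose_per_kg(dose_mg: float, weight_kg: float) -> float:
    """Levodopa dose normalized by body weight (mg/kg)."""
    return dose_mg / weight_kg

# Hypothetical examples: the same absolute dose yields a different
# effective (weight-relative) dose for people of different weights.
print(dose_per_kg(100, 60))
print(dose_per_kg(100, 90))
```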

The findings therefore suggest that at least part of the reason for the decline in episodic memory typically seen in older adults is caused by declining levels of dopamine.

Given that episodic memory is one of the first and greatest types of memory hit by Alzheimer’s, this finding also has implications for Alzheimer’s treatment.

Caffeine improves recognition of positive words

Another recent study also demonstrates, rather more obliquely, the benefits of dopamine. In this study, 200 mg of caffeine (equivalent to 2-3 cups of coffee), taken 30 minutes earlier by healthy young adults, was found to improve recognition of positive words, but had no effect on the processing of emotionally neutral or negative words. Positive words are consistently processed faster and more accurately than negative and neutral words.

Because caffeine is linked to an increase in dopamine transmission (an indirect effect, stemming from caffeine’s inhibitory effect on adenosine receptors), the researchers suggest that this effect of caffeine on positive words demonstrates that the processing advantage enjoyed by positive words is driven by the involvement of the dopaminergic system.

A brain scanning study involved 60 young adults, half of whom were put under stress by having a hand immersed in ice-cold water for three minutes under the supervision of a somewhat unfriendly examiner, while the other half immersed a hand in warm water without such supervision (cortisol and blood pressure tests confirmed the stress difference).

About 25 minutes after this (cortisol reaches peak levels around 25 minutes after stress), participants’ brains were scanned while participants alternated between a classification task and a visual-motor control task. The classification task required them to look at cards with different symbols and learn to predict which combinations of cards announced rain and which sunshine. Afterward, they were given a short questionnaire to determine their knowledge of the task. The control task was similar but there were no learning demands (they looked at cards on the screen and made a simple perceptual decision).

In order to determine the strategy individuals used to do the classification task, ‘ideal’ performance was modeled for four possible strategies, of which two were ‘simple’ (based on single cues) and two ‘complex’ (based on multiple cues).
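This kind of strategy analysis can be sketched as scoring each candidate strategy model against a participant's actual responses and assigning the best match. The sketch below is a simplified illustration — the cue structure, the particular strategies, and the data are invented, not the study's models:

```python
# Each trial shows some subset of 4 cue cards; the response is "rain" or "sun".
# 'Simple' strategies answer from a single cue; 'complex' ones combine cues.

def simple_strategy(cues, key_cue):
    """Predict 'rain' whenever one particular card is present."""
    return "rain" if key_cue in cues else "sun"

def complex_strategy(cues):
    """Predict 'rain' when both rain-associated cards (1 and 2) appear."""
    return "rain" if len({1, 2} & set(cues)) >= 2 else "sun"

def best_fit(trials, responses):
    """Score each candidate strategy by agreement with the actual responses."""
    candidates = {
        "simple-cue-1": lambda c: simple_strategy(c, 1),
        "simple-cue-2": lambda c: simple_strategy(c, 2),
        "complex": complex_strategy,
    }
    scores = {name: sum(f(c) == r for c, r in zip(trials, responses))
              for name, f in candidates.items()}
    return max(scores, key=scores.get)

trials = [(1,), (2, 3), (1, 2), (3, 4)]
responses = ["rain", "sun", "rain", "sun"]  # agrees perfectly with simple-cue-1
print(best_fit(trials, responses))
```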

Here’s the interesting thing: while both groups were successful in learning the task, the two groups learned to do it in different ways. Far more of the non-stressed group activated the hippocampus to pursue a simple and deliberate strategy, focusing on individual symbols rather than combinations of symbols. The stressed group, on the other hand, were far more likely to use the striatum only, in a more complex and subconscious processing of symbol combinations.

The stressed group also remembered significantly fewer details of the classification task.

There was no difference between the groups on the (simple, perceptual) control task.

In other words, it seems that stress interferes with conscious, purposeful learning, causing the brain to fall back on more ‘primitive’ mechanisms that involve procedural learning. Striatum-based procedural learning is less flexible than hippocampus-based declarative learning.

Why should this happen? Well, the non-conscious procedural learning going on in the striatum is much less demanding of cognitive resources, freeing up your working memory to do something important — like worrying about the source of the stress.

Unfortunately, such learning will not become part of your more flexible declarative knowledge base.

The finding may have implications for stress disorders such as depression, addiction, and PTSD. It may also have relevance for a memory phenomenon known as “forgotten baby syndrome”, in which parents forget their babies in the car. This may be related to the use of non-declarative memory, because of the stress they are experiencing.


Two new studies provide support for the judicious use of sleep learning — as a means of reactivating learning that occurred during the day.

Back when I was young, sleep learning was a popular idea: a tape would play while you were asleep, and learning would seep into your brain effortlessly. It was particularly advocated for language learning. Subsequent research, unfortunately, rejected the idea, and it has gradually faded (although not completely). Now a new study may presage a comeback.

In the study, 16 young adults (mean age 21) learned how to ‘play’ two artificially-generated tunes by pressing four keys in time with repeating 12-item sequences of moving circles — the idea being to mimic the sort of sensorimotor integration that occurs when musicians learn to play music. They then took a 90-minute nap. During slow-wave sleep, one of the tunes was repeatedly played to them (20 times over four minutes). After the nap, participants were tested on their ability to play the tunes.

A separate group of 16 students experienced the same events, but without the tune being played during sleep. A third group stayed awake for the 90-minute period, during which they performed a demanding working memory task. White noise was played in the background, with the melody covertly embedded in it.

Consistent with the idea that sleep is particularly helpful for sensorimotor integration, and that reinstating information during sleep produces reactivation of those memories, the sequence ‘practiced’ during slow-wave sleep was remembered better than the unpracticed one. Moreover, the amount of improvement was positively correlated with the proportion of time spent in slow-wave sleep.
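Such a correlation is a plain Pearson coefficient computed over per-participant pairs. A minimal sketch — the data points are invented; only the shape of the analysis follows the study:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-participant data: proportion of the nap spent in
# slow-wave sleep vs. improvement on the cued tune.
sws = [0.2, 0.3, 0.4, 0.5]
improvement = [1.0, 2.1, 2.9, 4.2]
print(round(pearson(sws, improvement), 2))
```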

Among those who didn’t hear any sounds during sleep, improvement likewise correlated with the proportion of time spent in slow-wave sleep. The level of improvement for this group was intermediate to that of the practiced and unpracticed tunes in the sleep-learning group.

The findings add to growing evidence of the role of slow-wave sleep in memory consolidation. Whether the benefits for this very specific skill extend to other domains (such as language learning) remains to be seen.

However, another recent study carried out a similar procedure with object-location associations. Fifty everyday objects were associated with particular locations on a computer screen, and presented at the same time with characteristic sounds (e.g., a cat with a meow and a kettle with a whistle). The associations were learned to criterion, before participants slept for 2 hours in an MR scanner. During slow-wave sleep, auditory cues related to half the learned associations were played, as well as ‘control’ sounds that had not been played previously. Participants were tested after a short break and a shower.

A difference in brain activity was found for associated sounds and control sounds — associated sounds produced increased activation in the right parahippocampal cortex — demonstrating that even in deep sleep some sort of differential processing was going on. This region overlapped with the area involved in retrieval of the associations during the earlier, end-of-training test. Moreover, when the associated sounds were played during sleep, parahippocampal connectivity with the visual-processing regions increased.

All of this suggests that, indeed, memories are being reactivated during slow-wave sleep.

Additionally, brain activity in certain regions at the time of reactivation (mediotemporal lobe, thalamus, and cerebellum) was associated with better performance on the delayed test. That is, those who had greater activity in these regions when the associated sounds were played during slow-wave sleep remembered the associations best.

The researchers suggest that successful reactivation of memories depends on responses in the thalamus, which if activated feeds forward into the mediotemporal lobe, reinstating the memories and starting the consolidation process. The role of the cerebellum may have to do with the procedural skill component.

The findings are consistent with other research.

All of this is very exciting, but of course this is not a strategy for learning without effort! You still have to do your conscious, attentive learning. But these findings suggest that we can increase our chances of consolidating the material by replaying it during sleep. Of course, there are two practical problems with this: the material needs an auditory component, and you somehow have to replay it at the right time in your sleep cycle.


People with a strong genetic risk of early-onset Alzheimer’s have revealed a progression of brain changes that begin 25 years before symptoms are evident.

A study involving those with a strong genetic risk of developing Alzheimer’s has found that the first signs of the disease can be detected 25 years before symptoms are evident. Whether this is also true of those who develop the disease without having such a strong genetic predisposition is not yet known.

The study involved 128 individuals with a 50% chance of inheriting one of three mutations that are certain to cause Alzheimer’s, often at an unusually young age. On the basis of participants’ parents’ medical history, an estimate of age of onset was calculated.

The first observable brain marker was a drop in cerebrospinal fluid levels of amyloid-beta proteins, which could be detected 25 years before the anticipated age of onset. Amyloid plaques in the precuneus became visible on brain scans 15-20 years before expected onset; elevated cerebrospinal fluid levels of the tau protein appeared 10-15 years before, and hippocampal atrophy 15 years before. Ten years before symptoms, the precuneus showed reduced use of glucose, and slight impairments in episodic memory (as measured in the delayed-recall part of the Wechsler Logical Memory subtest) were detectable. Global cognitive impairment (measured by the MMSE and the Clinical Dementia Rating scale) was detected 5 years before expected symptom onset, and patients met diagnostic criteria for dementia at an average of 3 years after expected onset.
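The reported cascade can be laid out as a timeline relative to expected symptom onset (years before onset, taken from the figures above; where a range was reported, the lower bound is used):

```python
# Years before expected symptom onset at which each marker was detectable
markers = {
    "CSF amyloid-beta drop": 25,
    "amyloid plaques (precuneus)": 15,   # reported as 15-20 years
    "hippocampal atrophy": 15,
    "elevated CSF tau": 10,              # reported as 10-15 years
    "reduced precuneus glucose use": 10,
    "episodic memory impairment": 10,
    "global cognitive impairment": 5,
}

for marker, years in sorted(markers.items(), key=lambda kv: -kv[1]):
    print(f"{years:>2} years before onset: {marker}")
```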

Family members without the risky genes showed none of these changes.

The risky genes are PSEN1 (present in 70 participants), PSEN2 (11), and APP (7) — note that together these account for 30-50% of early-onset familial Alzheimer’s, although only 0.5% of Alzheimer’s in general. The ‘Alzheimer’s gene’ APOE4 (which is a risk factor for sporadic, not familial, Alzheimer’s) was no more likely to be present in these carriers (25%) than in noncarriers (22%), and there were no gender differences. The average parental age of symptom onset was 46 (note that this pushes the first biomarker back to age 21! Can we speculate a connection to noncarriers having significantly more education than carriers — 15 years vs 13.9?).

The results paint a clear picture of how Alzheimer’s progresses, at least in this particular pathway. First come increases in the amyloid-beta protein, followed by amyloid pathology, tau pathology, brain atrophy, and decreased glucose metabolism. Following this biological cascade, cognitive impairment ensues.

The degree to which these findings apply to the far more common sporadic Alzheimer’s is not known, but evidence from other research is consistent with this progression.

It must be noted, however, that the findings are based on cross-sectional data — that is, pieced together from individuals at different ages and stages. A longitudinal study is needed to confirm.

The findings do suggest the importance of targeting the first step in the cascade — the over-production of amyloid-beta — at a very early stage.

Researchers encourage people with a family history of multiple generations of Alzheimer’s diagnosed before age 55 to register at http://www.DIANXR.org/, if they would like to be considered for inclusion in any research.


For those with the Alzheimer’s gene, higher blood pressure, even though within the normal range, is linked to greater brain shrinkage and reduced cognitive ability.

I’ve reported before on the evidence suggesting that carriers of the ‘Alzheimer’s gene’, APOE4, tend to have smaller brain volumes and perform worse on cognitive tests, despite being cognitively ‘normal’. However, the research hasn’t been consistent, and now a new study suggests the reason.

The e4 variant of the apolipoprotein E (APOE) gene not only increases the risk of dementia, but also of cardiovascular disease. These effects are not unrelated. Apolipoprotein is involved in the transportation of cholesterol. In older adults, it has been shown that other vascular risk factors (such as elevated cholesterol, hypertension or diabetes) worsen the cognitive effects of having this gene variant.

This new study extends the finding by looking at 72 healthy adults across a wide age range (19-77).

Participants were tested on various cognitive abilities known to be sensitive to aging and to the effects of the e4 allele, including speed of information processing, working memory, and episodic memory. Blood pressure measurements, brain scans, and of course genetic tests were also carried out.

There are a number of interesting findings:

The relationship between age and hippocampal volume was stronger for those carrying the e4 allele (shrinkage of this brain region occurs with age, and is significantly greater in those with MCI or dementia).

Higher systolic blood pressure was significantly associated with greater atrophy (i.e., smaller volumes), slower processing speed, and reduced working memory capacity — but only for those with the e4 variant.

Among those with the better and more common e3 variant, working memory was associated with lateral prefrontal cortex volume and with processing speed. Greater age was associated with higher systolic blood pressure, smaller volumes of the prefrontal cortex and prefrontal white matter, and slower processing. However, blood pressure was not itself associated with either brain atrophy or slower cognition.

For those with the Alzheimer’s variant (e4), older adults with higher blood pressure had smaller volumes of prefrontal white matter, and this in turn was associated with slower speed, which in turn linked to reduced working memory.

In other words, for those with the Alzheimer’s gene, age differences in working memory (which underpin so much of age-related cognitive impairment) were produced by higher blood pressure, reduced prefrontal white matter, and slower processing. For those without the gene, age differences in working memory were produced by reduced prefrontal cortex and prefrontal white matter.

Most importantly, these increases in blood pressure that we are talking about are well within the normal range (although at the higher end).

The researchers make an interesting point: that these findings are in line with “growing evidence that ‘normal’ should be viewed in the context of individual’s genetic predisposition”.

What it comes down to is this: those with the Alzheimer’s gene variant (and no doubt other genetic variants) have a greater vulnerability to some of the risk factors that commonly increase as we age. Those with a family history of dementia or serious cognitive impairment should therefore pay particular attention to controlling vascular risk factors, such as hypertension and diabetes.

This doesn’t mean that those without such a family history can safely ignore such conditions! When they get to the point of being clinically diagnosed as problems, then they are assuredly problems for your brain regardless of your genetics. What this study tells us is that these vascular issues appear to be problematic for Alzheimer’s gene carriers before they get to that point of clinical diagnosis.


A series of experiments indicates that walking through doorways creates event boundaries, requiring us to update our awareness of current events and making information about the previous location less available.

We’re all familiar with the experience of going to another room and forgetting why we’ve done so. The problem has been largely attributed to a failure of attention, but recent research suggests something rather more specific is going on.

In a previous study, a virtual environment was used to explore what happens when people move through several rooms. The virtual environment was displayed on a very large (66 inch) screen to provide a more immersive experience. Each ‘room’ had one or two tables. Participants ‘carried’ an object, which they would deposit on a table, before picking up a different object. At various points, they were asked if the object was, say, a red cube (memory probe). The objects were not visible at the time of questioning. It was found that people were slower and less accurate if they had just moved to a new room.

To assess whether this effect depends on a high degree of immersion, a recent follow-up to this study replicated the study using standard 17” monitors rather than the giant screens. The experiment involved 55 students and once again demonstrated a significant effect of shifting rooms. Specifically, when the probe was positive, the error rate was 19% in the shift condition compared to 12% on trials when the participant ‘traveled’ the same distance but didn’t change rooms. When the probe was negative, the error rate was 22% in the shift condition vs 7% for the non-shift condition. Reaction time was less affected — there was no difference when the probes were positive, but a marginally significant difference on negative-probe trials.

The second experiment went to the other extreme. Rather than reducing the immersive experience, researchers increased it — to a real-world environment. Unlike the virtual environments, distances couldn’t be kept constant across conditions. Three large rooms were used, and no-shift trials involved different tables at opposite ends of the room. Six objects, rather than just one, were moved on each trial. Sixty students participated.

Once again, more errors occurred when a room-shift was involved. On positive-probe trials, the error rate was 28% in the shift condition vs 23% in the non-shift. On negative-probe trials, the error rate was 21% and 18%, respectively. The difference in reaction times wasn’t significant.
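The ‘doorway cost’ in each experiment can be summarized as the difference in error rates between the shift and no-shift conditions, using the percentages reported above:

```python
# Error rates (%) as (shift, no_shift) by experiment setting and probe type
results = {
    ("monitor", "positive"): (19, 12),
    ("monitor", "negative"): (22, 7),
    ("real-world", "positive"): (28, 23),
    ("real-world", "negative"): (21, 18),
}

for (setting, probe), (shift, no_shift) in results.items():
    print(f"{setting}, {probe} probe: doorway cost = {shift - no_shift} points")
```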

The third experiment, involving 48 students, tested the idea that forgetting might be due to the difference in context at retrieval compared to encoding. To do this, the researchers went back to using the more immersive virtual environment (the 66” screen), and included a third condition. In this, either the participant returned to the original room to be tested (return) or continued on to a new room to be tested (double-shift) — the idea being to hold the number of spatial shifts the same.

There was no evidence that returning to the original room produced the sort of advantage expected if context-matching was the important variable. Memory was best in the no-shift condition, next best in the shift and return conditions (no difference between them), and worst in the double shift condition. In other words, it was the number of new rooms entered that appears to be important.

This is in keeping with the idea that we break the action stream into separate events using event boundaries. Passing through a doorway is one type of event boundary. A more obvious type is the completion of an action sequence (e.g., mixing a cake — the boundary is the action of putting it in the oven; speaking on the phone — the boundary is the action of ending the call). Information being processed during an event is more available, foregrounded in your attention. Interference occurs when two or more events are activated, increasing errors and sometimes slowing retrieval.

All of this has greater ramifications than simply helping to explain why we so often go to another room and forget why we’re there. The broader point is that everything that happens to us is broken up and filed, and we should look for the boundaries to these events and be aware of the consequences of them for our memory. Moreover, these contextual factors are important elements of our filing system, and we can use that knowledge to construct more effective tags.


Another study has come out supporting the idea that negative stereotypes about aging and memory affect how well older adults remember. In this case, older adults reminded of age-related decline were more likely to make memory errors.

In the study, 64 older adults (60-74; average 70) and 64 college students were compared on a word recognition task. Both groups first took a vocabulary test, on which they performed similarly. They were then presented with 12 lists of 15 semantically related words. For example, one list could have words associated with "sleep," such as "bed," "rest," "awake," "tired" and "night" — but not the word “sleep”. They were not told they would be tested on their memory of these; rather, they were asked to rate each word for pleasantness.

They then engaged in a five-minute filler task (a Sudoku) before a short text was read to them. For some, the text had to do with age-related declines in memory. These participants were told the experiment had to do with memory. For others, the text concerned language-processing research. These were told the experiment had to do with language processing and verbal ability.

They were then given a recognition test containing 36 of the studied words, 48 words unrelated to the studied words, and 12 words related to the studied words (e.g. “sleep”). After recording whether or not they had seen each word before, they also rated their confidence in that answer on an 8-point scale. Finally, they were given a lexical decision task to independently assess stereotype activation.

While young adults showed no effects from the stereotype manipulation, older adults were much more likely to falsely recognize related words that had not been studied if they had heard the text on memory. Those who heard the text on language were no more likely than the young adults to falsely recognize related words.

Note that there is always quite a high level of false recognition of such items: young adults, and older adults in the low-threat condition, falsely recognized around half of the related lures, compared to around 10% of unrelated words. But in the high-threat condition, older adults falsely recognized 71% of the related words.
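The size of the threat effect can be expressed as excess false recognition over the unrelated-word baseline, using the approximate rates reported above:

```python
# Approximate false-recognition rates of related lures reported in the study
false_recognition = {
    "young adults": 0.50,
    "older, low threat": 0.50,
    "older, high threat": 0.71,
}
unrelated_baseline = 0.10  # roughly, for words unrelated to the studied lists

for group, rate in false_recognition.items():
    excess = rate - unrelated_baseline
    print(f"{group}: {excess:.0%} excess false recognition of related lures")
```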

Moreover, older adults’ confidence was also affected. While young adults’ confidence in their false memories was unaffected by threat condition, older adults in the high-threat condition were more confident of their false memories than older adults in the low-threat condition.

The idea that older adults were affected by negative stereotypes about aging was supported by the results of the lexical decision task, which found that, in the high-threat condition, older adults responded more quickly to words associated with negative stereotypes than to neutral words (indicating that they were more accessible). Young adults did not show this difference.


Whether corrections to students’ misconceptions ‘stick’ depends on the strength of the memory of the correction.

Students come into classrooms filled with inaccurate knowledge they are confident is correct, and overcoming these misconceptions is notoriously difficult. In recent years, research has shown that such false knowledge can be corrected with feedback. The hypercorrection effect, as it has been termed, expresses the finding that when students are more confident of a wrong answer, they are more likely to remember the right answer if corrected.

This is somewhat against intuition and experience, which would suggest that it is harder to correct more confidently held misconceptions.

A new study tells us how to reconcile experimental evidence and belief: false knowledge is more likely to be corrected in the short-term, but also more likely to return once the correction is forgotten.

In the study, 50 undergraduate students were tested on basic science facts. After rating their confidence in each answer, they were told the correct answer. Half the students were then retested almost immediately (after a 6 minute filler task), while the other half were retested a week later.

There were 120 questions in the test. Examples include: What is stored in a camel's hump? How many chromosomes do humans have? What is the driest area on Earth? The average percentage of correct responses on the initial test was 38%, and as expected, performance on the second test was significantly better in the immediate condition than in the delayed condition (90% vs 71%).

Students who were retested immediately gave the correct answer on 86% of their previous errors, and they were more likely to correct their high-confidence errors than those made with little confidence (the hypercorrection effect). Those retested a week later also showed the hypercorrection effect, albeit at a much lower level: they corrected only 56% of their previous errors. (More precisely, on the immediate test, corrected answers rose from 79% at the lowest confidence level to 92% at the highest. On the delayed test, corrected answers rose from 43% at the lowest confidence level to 70% at the second-highest, then dipped to 64% at the highest.)

In those instances where students had forgotten the correct answer, they were much more likely to reproduce the original error if their confidence had been high. Indeed, on the immediate test, the same error was rarely repeated, regardless of confidence level (the proportion of repeated errors hovered at 3-4% pretty much across the board). On the delayed test, on the other hand, there was a linear increase, with repeated errors steadily increasing from 14% to 23% as confidence level rose (with the same odd exception — at the second highest confidence level, proportion of repeated errors suddenly fell).
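The analysis behind the hypercorrection effect amounts to tallying correction rates per initial confidence level. A minimal sketch — the trial records below are invented for illustration, not the study's data:

```python
from collections import defaultdict

def correction_rate_by_confidence(trials):
    """trials: list of (confidence, corrected) pairs for initial errors.
    Returns the proportion of errors corrected at each confidence level."""
    counts = defaultdict(lambda: [0, 0])  # confidence -> [corrected, total]
    for confidence, corrected in trials:
        counts[confidence][0] += corrected
        counts[confidence][1] += 1
    return {c: corrected / total for c, (corrected, total) in counts.items()}

# Hypothetical data showing the effect: high-confidence errors (level 7)
# are corrected more often than low-confidence errors (level 1).
trials = [(1, True), (1, False), (1, False), (7, True), (7, True), (7, False)]
print(correction_rate_by_confidence(trials))
```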

Overall, students were more likely to correct their errors if they remembered their error than if they didn’t (72% vs 65%). Unsurprisingly, those in the immediate group were much more likely to remember their initial errors than those in the delayed group (85% vs 61%).

In other words, it’s all about relative strength of the memories. While high-confidence errors are more likely to be corrected if the correct answer is readily accessible, they are also more likely to be repeated once the correct answer becomes less accessible. The trick to replacing false knowledge, then, is to improve the strength of the correct information.

Thus, as recency fades, you need to engage frequency to make the new memory stronger. So the finding points to the special need for multiple repetition, if you are hoping to correct entrenched false knowledge. The success of immediate testing indicates that properly spaced retrieval practice is probably the best way of replacing incorrect knowledge.