I reported recently on how easily and quickly we can get derailed from a chain of thought (or action). In a similar vein, here’s another study showing how easy it is to omit important steps in an emergency, even when you’re an expert — which is why I’m a great fan of checklists.

Checklists have been shown to dramatically decrease the chances of an error, in areas such as flying and medicine. However, while surgeons may use checklists as a matter of routine (a study a few years ago found that the use of routine checklists before surgery substantially reduced the chances of a serious complication — we can hope that everyone’s now on board with that!), there’s a widespread belief in medicine that operating room crises are too complex for a checklist to be useful. A new study contradicts that belief.

The study involved 17 operating room teams (anesthesia staff, operating room nurses, surgical technologists, a surgeon), who participated in 106 simulated surgical crisis scenarios in a simulated operating room. Each team was randomized to manage half of the scenarios with a set of crisis checklists and the remaining scenarios from memory alone.

When checklists were used, the teams were 74% less likely to miss critical steps. That is, without a checklist, nearly a quarter (23%) of the steps were omitted (an alarming figure!), while with a checklist, only 6% of the steps were omitted on average. Every team performed better when the checklists were available.
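The relationship between those figures is worth making explicit. A quick sanity check (using the rounded percentages reported above, which is why the result comes out at roughly 74%):

```python
# Back-of-the-envelope check of the reported effect sizes.
# Figures are the rounded proportions reported in the study summary above.
missed_without = 0.23   # proportion of critical steps omitted from memory alone
missed_with = 0.06      # proportion omitted when a crisis checklist was used

# Relative reduction in missed steps attributable to the checklist
relative_reduction = (missed_without - missed_with) / missed_without
print(f"{relative_reduction:.0%}")  # prints "74%"
```

In other words, the “74% less likely” headline figure is simply the relative drop from 23% to 6%.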

After experiencing these simulations, almost all (97%) of the participants said that, if they were a patient, they would want these checklists used should such a crisis occur.

It’s comforting to know that airline pilots do have checklists to use in emergency situations. Now we must hope that hospitals come on board with this as well (up-to-date checklists and implementation materials can be found at www.projectcheck.org/crisis).

For the rest of us, the study serves as a reminder that, however practiced we may think we are, forgetting steps in an action plan is only too common, and checklists are an excellent means of dealing with this — in emergency and out.

In my book on remembering intentions, I spoke of how quickly and easily your thoughts can be derailed, leading to ‘action slips’ and, in the wrong circumstances, catastrophic mistakes. A new study shows how a 3-second interruption while doing a task doubled the rate of sequence errors, while a 4-second one tripled it.

The study involved 300 people, who were asked to perform a series of ordered steps on the computer. The steps had to be performed in a specific sequence, mnemonically encapsulated by UNRAVEL, with each letter identifying a step. The task rules for each step differed, requiring the participant to mentally shift gears each time. Moreover, task elements could play multiple roles — for example, the letter U could signal the step, could be one of the two possible responses for that step, or could be a stimulus requiring a specific response when the step was N. Each step required the participant to choose between two possible responses based on one stimulus feature — features included whether the character was a letter or a digit, whether it was underlined or italic, whether it was red or yellow, and whether the character outside the outline box was above or below it. There were also more cognitive features, such as whether the letter was near the beginning of the alphabet or not. The identifying mnemonic for each step was linked to its possible responses (e.g., N step — near or far; U step — underline or italic).
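The structure of the task can be sketched as a fixed step order plus a two-choice rule per step. Note the hedging in the comments: the study summary above only states the U and N pairings explicitly, so the other assignments here are my own illustrative guesses, and two features are left unfilled because they aren’t described at all.

```python
# A minimal sketch of the UNRAVEL placekeeping task as described above.
# Only the U and N letter-to-rule pairings are stated explicitly; the
# assignment of the remaining features is my own illustrative guess.
STEPS = "UNRAVEL"  # the steps must be performed in this fixed order

RULES = {
    "U": ("underline", "italic"),  # stated in the study summary
    "N": ("near", "far"),          # near the start of the alphabet, or not
    "R": ("red", "yellow"),        # colour feature (assignment assumed)
    "A": ("above", "below"),       # character outside the box (assumed)
    "V": None,                     # feature not described in the summary
    "E": None,                     # feature not described in the summary
    "L": ("letter", "digit"),      # character type (assignment assumed)
}

def next_step(current: str) -> str:
    """Placekeeping: resume at the step after the one last completed.
    This is exactly what an interruption disrupts, producing sequence
    errors (skipping a step, or repeating one)."""
    i = STEPS.index(current)
    return STEPS[(i + 1) % len(STEPS)]

print(next_step("U"))  # prints "N"
```

The point of the design is that correct performance depends entirely on remembering your position in STEPS — there is no external cue to where you are.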

At various points, participants were very briefly interrupted. In the first experiment, they were asked to type four characters (letters or digits); in the second experiment, they were asked to type only two (a very brief interruption indeed!).

All of this was designed to emulate “train of thought” operations, where correct performance depends on remembering where you are in the sequence, and to produce a situation with a reasonably high proportion of errors — one of the problems with this type of research has been the use of routine tasks that are generally performed with a high degree of accuracy, generating only small amounts of error data for analysis.

In both experiments, interruptions significantly increased the rate of sequence errors on the first trial after the interruption (but not on subsequent ones). Nonsequence errors were not affected. In the first experiment (four-character interruption), the sequence error rate on the first trial after the interruption was 5.8%, compared to 1.8% on subsequent trials. In the second experiment (two-character interruption), it was 4.3%.

The four-character interruptions lasted an average of 4.36s, and the two-character interruptions lasted an average of 2.76s.
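These rates line up with the “doubled/tripled” claim. The arithmetic below uses the 1.8% baseline reported for the first experiment (I’m assuming the baseline was comparable in the second experiment, which the summary above doesn’t state):

```python
# Tying the reported error rates back to the "doubled/tripled" claim.
baseline = 1.8      # % sequence errors on trials not following an interruption
after_short = 4.3   # % on the first trial after a ~2.76 s interruption
after_long = 5.8    # % on the first trial after a ~4.36 s interruption

print(round(after_short / baseline, 1))  # prints 2.4 -> roughly doubled
print(round(after_long / baseline, 1))   # prints 3.2 -> roughly tripled
```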

Whether the characters being typed were letters or digits made no difference, suggesting that the disruptive effects of interruptions are not overly sensitive to what’s being processed during the interruption (although of course these are not wildly different processes!).

The absence of any effect on nonsequence errors shows that interruptions don’t disrupt global attentional resources, but rather the specific placekeeping component of the task.

As I discussed in my book, the step also made a significant difference — for sequence errors, middle steps showed higher error rates than end steps.

All of this confirms and quantifies how little it takes to derail us, and reminds us that, when engaged in tasks involving the precise sequence of sub-tasks (which so many tasks do), we need to be alert to the dangers of interruptions. This is, of course, particularly true for those working in life-critical areas, such as medicine.

Being a woman of a certain age, I generally take notice of research into the effects of menopause on cognition. A new study adds weight, perhaps, to the idea that cognitive complaints in perimenopause and menopause are not directly a consequence of hormonal changes, and more particularly shows that early postmenopause may be the most problematic time.

The study followed 117 women from four stages of life: late reproductive, early and late menopausal transition, and early postmenopause. The late reproductive period is defined as when women first begin to notice subtle changes in their menstrual periods, but still have regular menstrual cycles. Women in the transitional stage (which can last for several years) experience fluctuation in menstrual cycles, and hormone levels begin to fluctuate significantly.

Women in early postmenopause (the first year after menopause), as a group, were found to perform more poorly on measures of verbal learning, verbal memory, and fine motor skill than women in the late reproductive and late transition stages. They also performed significantly worse than women in the late menopausal transition stage on attention/working memory tasks.

Surprisingly, self-reported symptoms such as sleep difficulties, depression, and anxiety did not predict memory problems. Neither were the problems correlated with hormone levels (although fluctuations could be a factor).

This seemingly contradicts earlier findings from the same researchers, who in a slightly smaller study found that those experiencing poorer working memory and attention were more likely to have poorer sleep, depression, and anxiety. That study, however, only involved women approaching and in menopause. Moreover, these aspects were mentioned not in the abstract of the paper but only in the press release, and because I don’t have access to this particular journal, I cannot say whether there is something in the data that explains the discrepancy. Accordingly, I’m not inclined to put too much weight on this point.

But we may perhaps take the findings as support for the view that cognitive problems experienced earlier in the menopause cycle are, when they occur, not a direct result of hormonal changes.

The important result of this study is the finding that the cognitive problems often experienced by women in their 40s and 50s are most acute during early postmenopause, and the indication that the causes and manifestations are different at different stages of menopause.

It should be noted, however, that there were only 14 women in the early postmenopause stage. So, we shouldn’t put too much weight on any of this. Nevertheless, it does add to the picture research is building up about the effects of menopause on women’s cognition.

While the researchers said that this effect is probably temporary — which was picked up as the headline in most media — this was not in fact investigated in this study. It would be nice to have some comparison with women, say, two, three, or five years postmenopause (but quite possibly this will be reported in a later paper).

While smartphones and other digital assistants have been found to help people with mild memory impairment, their use by those with greater impairment has been less successful. However, a training program developed at the Baycrest Centre for Geriatric Care has been using the power of implicit memory to help impaired individuals master new skills.

The study involved 10 outpatients, aged 18 to 55 (average age 44), who had moderate-to-severe memory impairment, the result of non-neurodegenerative conditions including ruptured aneurysm, stroke, tumor, epilepsy, closed-head injury, or anoxia after a heart attack. They all reported difficulty in day-to-day functioning.

Participants were trained in the basic functions of either a smartphone or another personal digital assistant (PDA) device, using an errorless training method that tapped into their preserved implicit/procedural memory. In this method, cues are progressively faded, in such a way as to ensure there is always enough information to prompt the correct response. The fading of the cues was based on the trainer’s observation of the patient’s behavior.

Participants were given several one-hour training sessions to learn calendaring skills such as inputting appointments and reminders. Each application was broken down into its component steps, and each step was given its own score in terms of how much support was needed. Support ranged from a full explanation and demonstration, through a full explanation plus pointing to the next step, simply pointing to the next step, or simply confirming a correct query, down to no support at all. The hour-long sessions occurred twice a week (with one exception, who only received one session a week). Training continued until the individual reached criterion-level performance (98% correct over a single session). On average, this took about 8 sessions, but as a general rule, those with relatively focal impairment tended to be substantially quicker than those with more extensive cognitive impairment.

After this first training phase, participants took their devices home, where they extended their use of the device through new applications mastered using the same protocol. These new tasks were carefully scaffolded to enable progressively more difficult tasks to be learned.

To assess performance, participants were given a schedule of 10 phone calls to complete over a two-week period at different times of the day. Additionally, family members kept a log of whether real-life tasks were successfully completed or not, and both participants and family members completed several questionnaires: one rating a list of common memory mistakes on a frequency-of-occurrence scale, another measuring confidence in dealing with various memory-demanding scenarios, and a third examining the participant's ability to use the device.

All 10 individuals showed improvement in day-to-day memory functioning after taking the training, and this improvement continued when the patients were followed up three to eight months later. Specifically, prospective memory (memory for future events) improved, and patient confidence in dealing with memory-demanding situations increased. Some patients also reported broadening their use of their device to include non-prospective memory tasks (e.g. entering names and/or photos of new acquaintances, or entering details of conversations).

It should be noted that these patients were some time past their injury, which was on average some 3 ½ years earlier (ranging from 10 months to over 25 years). Accordingly, they had all been through standard rehabilitation training, and already used many memory strategies. Questioning about strategy use prior to the training revealed that six participants used more memory strategies than they had before their injury, three hadn’t changed their strategy use, and one used fewer. Strategies included: calendars, lists, reminders from others, notebooks, day planner, placing items in prominent places, writing a note, relying on routines, alarms, organizing information, saying something out loud in order to remember it, mental elaboration, concentrating hard, mental retracing, computer software, spaced repetition, creating acronyms, alphabetic retrieval search.

The purpose of this small study, which built on an earlier study involving only two patients, was to demonstrate the generalizability of the training method to a larger number of individuals with moderate-to-severe memory impairment. Hopefully, it will also reassure such individuals, who tend not to use electronic memory aids, that these are a useful tool that they can, with the right training, learn to use successfully.

We’re all familiar with the experience of going to another room and forgetting why we’ve done so. The problem has been largely attributed to a failure of attention, but recent research suggests something rather more specific is going on.

In a previous study, a virtual environment was used to explore what happens when people move through several rooms. The virtual environment was displayed on a very large (66 inch) screen to provide a more immersive experience. Each ‘room’ had one or two tables. Participants ‘carried’ an object, which they would deposit on a table, before picking up a different object. At various points, they were asked if the object was, say, a red cube (memory probe). The objects were not visible at the time of questioning. It was found that people were slower and less accurate if they had just moved to a new room.

To assess whether this effect depends on a high degree of immersion, a recent follow-up to this study replicated the study using standard 17” monitors rather than the giant screens. The experiment involved 55 students and once again demonstrated a significant effect of shifting rooms. Specifically, when the probe was positive, the error rate was 19% in the shift condition compared to 12% on trials when the participant ‘traveled’ the same distance but didn’t change rooms. When the probe was negative, the error rate was 22% in the shift condition vs 7% for the non-shift condition. Reaction time was less affected — there was no difference when the probes were positive, but a marginally significant difference on negative-probe trials.

The second experiment went to the other extreme. Rather than reducing the immersive experience, researchers increased it — to a real-world environment. Unlike the virtual environments, distances couldn’t be kept constant across conditions. Three large rooms were used, and no-shift trials involved different tables at opposite ends of the room. Six objects, rather than just one, were moved on each trial. Sixty students participated.

Once again, more errors occurred when a room-shift was involved. On positive-probe trials, the error rate was 28% in the shift condition vs 23% in the non-shift. On negative-probe trials, the error rate was 21% and 18%, respectively. The difference in reaction times wasn’t significant.

The third experiment, involving 48 students, tested the idea that forgetting might be due to the difference in context at retrieval compared to encoding. To do this, the researchers went back to using the more immersive virtual environment (the 66” screen), and included a third condition. In this, either the participant returned to the original room to be tested (return) or continued on to a new room to be tested (double-shift) — the idea being to hold the number of spatial shifts the same.

There was no evidence that returning to the original room produced the sort of advantage expected if context-matching was the important variable. Memory was best in the no-shift condition, next best in the shift and return conditions (no difference between them), and worst in the double shift condition. In other words, it was the number of new rooms entered that appears to be important.

This is in keeping with the idea that we break the action stream into separate events using event boundaries. Passing through a doorway is one type of event boundary. A more obvious type is the completion of an action sequence (e.g., mixing a cake — the boundary is the action of putting it in the oven; speaking on the phone — the boundary is the action of ending the call). Information being processed during an event is more available, foregrounded in your attention. Interference occurs when two or more events are activated, increasing errors and sometimes slowing retrieval.

All of this has greater ramifications than simply helping to explain why we so often go to another room and forget why we’re there. The broader point is that everything that happens to us is broken up and filed, and we should look for the boundaries to these events and be aware of the consequences of them for our memory. Moreover, these contextual factors are important elements of our filing system, and we can use that knowledge to construct more effective tags.

In my book on remembering what you’re doing and what you intend to do, I briefly discuss the popular strategy of asking someone to remind you (basically, whether it’s an effective strategy depends on several factors, of which the most important is the reliability of the person doing the reminding). So I was interested to see a pilot study investigating the use of this strategy between couples.

The study confirms earlier findings that the extent to which this strategy is effective depends on how reliable the partner's memory is, but expands on that by tying it to age and conversational style.

The study involved 11 married couples, of whom five were middle-aged (average age 52), and six were older adults (average age 73). Participants completed a range of prospective memory tasks by playing the board game "Virtual Week," which encourages verbal interaction among players about completing real life tasks. For each virtual "day" in the game, participants were asked to perform 10 different prospective memory tasks — four that regularly occur (e.g., taking medication with breakfast), four that were different each day (e.g., purchasing gasoline for the car), and two time-check tasks that were not based on the activities of the board game (e.g., check lung capacity at two specified times).

Overall, the middle-aged group benefited more from collaboration than the older group. But it was also found that those couples who performed best were those who were more supportive and encouraging of each other.

Collaboration in memory tasks is an interesting activity, because it can be both helpful and hindering. Think about how memory works — by association. You start from some point, and if you’re on a good track, more and more should be revealed as each memory triggers another. If another person keeps interrupting your train of thought, you can be derailed. On the other hand, they might help fill in the gaps you need, or even point you to the right track if you’re on the wrong one.

In this small study, it tended to be the middle-aged couples that filled in the gaps more effectively than the older couples. That probably has a lot to do with memory reliability. So it’s not a big surprise (though useful to be aware of). But what I find more interesting (because it’s less obvious, and more importantly, because it’s more under our control) is this idea that our conversational style affects whether memory collaboration is useful or counterproductive. I look forward to results from a larger study.

I’m not at all sure why the researcher says they were “stunned” by these findings, since they don’t surprise me in the least. A series of experiments into the role of imagination in creating false memories has revealed that people who had watched a video of someone else doing a simple action often remembered doing the action themselves two weeks later. In fact, in my book on remembering intentions, which includes a chapter on remembering whether you’ve done something, I mention the risk of imagining yourself doing something — that you then go on to believe you have actually done it — and given all the research on mirror neurons, it’s no big step from watching someone do something to remembering that you did it yourself. Nevertheless, it’s nice to get the confirmation.

The experiments involved participants performing several simple actions, such as shaking a bottle or shuffling a deck of cards. Then they watched videos of someone else doing simple actions—some of which they had performed themselves and some of which they hadn’t. Two weeks later, they were asked which actions they had done. They were much more likely to falsely remember doing an action if they had watched someone else do it — even when they had been warned about the effect.

It seems likely that this is an unfortunate side-effect of a very useful ability — namely our ability to learn motor skills by observing others (using the aforesaid mirror neurons) — and there’s probably not a great deal we can do to prevent it happening. It’s just a reminder of how easy it is to form false memories.

Previous research has shown that older adults are more likely to incorrectly repeat an action in situations where a prospective memory task has become habitual — for example, taking more medication because they’ve forgotten they’ve already taken it. A new study has found that doing something unusual at the same time helps seniors remember having done the task. In the study, older adults who were told to put a hand on their head whenever they made a particular response reduced their repetition errors to the level of younger adults. It’s suggested that doing something unusual, like knocking on wood or patting yourself on the head, while taking a daily dose of medicine may be an effective strategy to help seniors remember whether they've already taken their daily medications.

A study involving 42 students who were ecstasy/polydrug users has found that ecstasy, or the regular use of several drugs, affects users' prospective memory (remembering things you plan to do), even when tests are controlled for cannabis, tobacco or alcohol use. Cocaine use in particular was prominently associated with prospective memory impairment. Deficits were evident in both lab-based and self-reported measurements.

As we all know, being interrupted during a task greatly increases the chance we’ll go off-kilter (I discuss the worst circumstances and how you can minimize the risk of mistakes in my book Planning to remember). Medication errors occur as often as once per patient per day in some settings, and around one-third of harmful medication errors are thought to occur during medication administration.

Now an in-depth study involving 98 nurses at two Australian teaching hospitals over 505 hours has revealed that at least one procedural failure occurred in 74.4% of administrations, and at least one clinical failure in 25%. Procedural failures include errors such as failing to check the patient’s identification, record the medication administration, or use aseptic technique; clinical failures include errors such as the wrong drug, dose, or route. Interruptions occurred in over half of the 4,000 drug administrations, and each interruption was associated with a 12.1% increase in procedural failures and a 12.7% increase in clinical errors.

While most errors were rated as clinically insignificant, 2.7% were considered major errors — and these were much more likely to occur after interruptions, particularly repeated interruptions. The risk of major error was 2.3% when there was no interruption; this rose to 4.7% with four interruptions. Nurse experience provided no protection against making a clinical error, and was associated with higher procedural failure rates (this is common with procedural failures — expertise renders you more vulnerable, not less).

Older news items (pre-2010) brought over from the old website

A friendly reminder for HIV patients

Treating HIV requires patients to rigorously follow a medication schedule; more than most diseases, the virus easily develops a resistance to the drugs if not taken reliably. Moreover, HIV can cause brain damage, making it more difficult for some patients to remember. A device known as Jerry (more formally, the Disease Management Assistance System) flashes a light and verbally tells the patient the exact dosage and medication to take at the correct time. Of the 58 patients in a recent study, those with Jerry took their medication 80% of the time, while those without did so only 65% of the time. The difference was only significant for those with memory impairment: of the 31 memory-impaired patients, those using Jerry had a 77% adherence rate, while those without Jerry had a 57% adherence rate.

Older people with the 'Alzheimer's gene' find it harder to remember intentions

It has been established that those with a certain allele of a gene called ApoE have a much greater risk of developing Alzheimer’s (those with this allele on both copies of the gene have 8 times the risk; those with the allele on one copy have 3 times the risk). Recent studies also suggest that such carriers are more likely to show signs of deficits in episodic memory — but that these deficits are quite subtle. In the first study to look at prospective memory in seniors with the “Alzheimer’s gene”, involving 32 healthy, dementia-free adults between the ages of 60 and 87, researchers found a marked difference in performance between those who had the allele and those who did not. The results suggest an exception to the thinking that ApoE status has only a subtle effect on cognition.

'Imagination' helps older people remember to comply with medical advice

A new study suggests a way to help older people remember to take medications and follow other medical advice. Researchers found older adults (aged 60 to 81) who spent a few minutes picturing how they would test their blood sugar were 50% more likely to actually do these tests on a regular basis than those who used other memory techniques. Participants were assigned to one of three groups. One group spent one 3-minute session visualizing exactly what they would be doing and where they would be the next day when they were scheduled to test their blood sugar levels. Another group repeatedly recited aloud the instructions for testing their blood. The last group were asked to write a list of pros and cons for testing blood sugar. All participants were asked not to use timers, alarms or other devices. Over 3 weeks, the “imagination” group remembered 76% of the time to test their blood sugar at the right times of the day compared to an average of 46% in the other two groups. They were also far less likely to go an entire day without testing than those in the other two groups.

Alcohol damages day-to-day memory function

A new study involving 763 participants (465 female, 298 male) used three self-report questionnaires — the Prospective Memory Questionnaire (PMQ), the Everyday Memory Questionnaire (EMQ), and the UEL (University of East London) Recreational Drug Use Questionnaire — and found that heavy users of alcohol reported making consistently more errors than those who said they consumed little or no alcohol. More specifically, those who reported higher levels of alcohol consumption were more likely to miss appointments, forget birthdays, and forget to pay bills on time (prospective memory), and to have more problems remembering whether they had done something, like locking the door or switching off the lights or oven, or where they had put items like house keys. The study also found a significant increase in reported memory problems among people who claimed to drink between 10 and 25 units each week in comparison to non-drinkers – this is within the ’safe drinking’ limits suggested by U.K. government guidelines.