Although the scores on some LuCiD factors were indeed significantly higher after frontal stimulation at 25 Hz (beta, actually) and/or 40 Hz (gamma) frequencies (relative to sham or other frequencies), this did not mean the dreams were technically “lucid”.

The LuCiD scale consists of 28 statements, each rated on a 6-point scale (0: strongly disagree, 5: strongly agree). Among its factors, Insight is the awareness that one is currently dreaming, Dissociation is taking a third-person perspective on the dream, and Control is control over the dream plot.
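As a toy sketch of how such a scale is scored (the item-to-factor mapping below is made up for illustration; the real 28-item assignment is specified in Voss et al., 2013), each factor score is just the mean of that factor's item ratings:

```python
# Hypothetical item groupings -- the actual mapping of the 28 items
# onto factors is given in Voss et al. (2013), not reproduced here.
FACTOR_ITEMS = {
    "insight":      [0, 1, 2],
    "dissociation": [3, 4],
    "control":      [5, 6, 7],
}

def score_report(ratings):
    """Average the 0-5 ratings of each factor's items for one dream report."""
    return {factor: sum(ratings[i] for i in items) / len(items)
            for factor, items in FACTOR_ITEMS.items()}

ratings = [5, 4, 5, 1, 0, 3, 2, 4]  # one report's ratings for items 0-7
print(score_report(ratings))
```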

Of the eight LuCiD factors, Insight is the single most important criterion for lucid dreaming (Voss et al., 2013). However, the mean Insight score in the current study is well below that reported for lucid dreams in the earlier study used to construct the scale.

In other words, the 25 Hz and 40 Hz brain stimulation significantly increased Insight and Control, but not to the levels reported in lucid dreams (according to the authors' previous definition). The definition in the present study was less stringent: “Lucidity was assumed when subjects reported elevated ratings (>mean + 2 s.e.) on either or both of the LuCiD scale factors insight and dissociation.”
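To see how lenient that criterion is, here is a minimal sketch of the threshold it implies (the baseline ratings below are hypothetical; the actual per-factor means and standard errors are in the paper):

```python
import math

def lucidity_threshold(baseline_ratings):
    """Threshold implied by Voss et al. (2014): mean + 2 standard errors.

    Under the study's lenient definition, a factor rating above this
    value counts as 'lucid' -- even if it remains far below the ratings
    that lucid dreamers gave in the earlier scale-construction study.
    """
    n = len(baseline_ratings)
    mean = sum(baseline_ratings) / n
    var = sum((x - mean) ** 2 for x in baseline_ratings) / (n - 1)
    se = math.sqrt(var / n)
    return mean + 2 * se

# Hypothetical sham-condition Insight ratings on the 0-5 LuCiD scale
sham_insight = [0.2, 0.5, 0.0, 0.4, 0.3, 0.1, 0.6, 0.2]
print(round(lucidity_threshold(sham_insight), 3))
```

Note that with low, tightly clustered baseline ratings, the threshold stays well under 1 on a 0-5 scale, which is why a "significant" elevation need not resemble a full-blown lucid dream.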

Nonetheless, induced gamma band oscillations did result in a heightened perception of self-awareness during REM sleep, in particular the ability to view the ongoing dream activities as a detached observer. But don't waste your money investing in the latest neurocrap that claims to induce lucid dreaming... As Seen On Nature Neuroscience.

1 Note that tACS (transcranial alternating current stimulation) is different from the usual DIY tDCS (transcranial direct current stimulation). tACS is thought to modulate and entrain brain oscillations in a frequency-specific manner, although others are much more cautious in that interpretation.

5 Comments:

I read that over the weekend and I was wondering about those sub 1 ratings of insight as well. They do look really low. Lucid dreaming is fascinating though! Imagine if you could do it reliably; then you would never go to a movie theater ever again! NC, did you get money from the entertainment industry to trash this study? :)

On the more general issues of transcranial electrical stimulation (TES), I was reading this piece on WIRED: http://www.wired.com/2014/05/diy-brain-stimulation/ . If you go down towards the middle you read this: "It’s a rare thing for a scientist to stand up in front of a roomful of his peers and rip apart a study from his own lab. But that’s exactly what Vincent Walsh did in September at a symposium on brain stimulation at the UC Davis Center for Mind and Brain. Walsh is a cognitive neuroscientist at University College London, and his lab has done some of the studies that first made a splash in the media. One, published in Current Biology in 2010, found that brain stimulation enhanced people’s ability to learn a new number system based on made-up symbols. Only it didn’t really. “It doesn’t show what we said it shows; it doesn’t show what people think it shows,” Walsh said before launching into a dissection of his paper’s flaws. They ranged from the technical (guesswork about whether parts of the brain are being excited or inhibited) to the practical (a modest effect with questionable impact on any actual learning outside the lab). When he finished this devastating critique, he tore into two more studies from other high-profile labs. And the problems aren’t limited to these few papers, Walsh said, they’re endemic in this whole subfield of neuroscience."

That has always been my impression of TES: weak, hyped findings. Walsh admits that his Current Biology study does not show what they said it shows! Yet they got their publication in Current Biology. Are they retracting it?

It is potentially important to note the very low current used in this study (250 µA peak-to-peak). I imagine this was done to reduce the chance of waking subjects. However, this is very abnormal for tACS research, in which 1000 µA peak-to-peak is commonly used (e.g. Kar & Krekelberg, 2014, J Neurosci). It would be interesting to see if the effects are more substantial with higher currents.

You're comparing apples and oranges - LuCiD scores from different populations - based on a misunderstanding of Figure 5 in Voss et al. 2013 and Figure 3 in Voss et al. 2014.

Voss 2013 did two surveys (pg. 11):

> 2.4.1. Paper and pencil (Surveys Nos. 1 and 2)
>
> The LuCiD scale was advertised among students at Bonn University (Germany) and participants of the lucid dream group that meets weekly at Bonn University to train newcomers in lucid dreaming and to discuss dreams in general. To assure that dream reports referred to RECENT dreams, participants were asked to specify the time lag between the dream and the dream report. Only reports of recent dreams (less than 6 h since report) were included in the analysis.
>
> ...we also collected 117 dream reports after awakenings from REM sleep in the sleep laboratory at Bonn University. Sleep was monitored through standard polysomnography (somnomed, Germany). Participants reported to the sleep lab at 9:00 p.m. and were instrumented for PSG. Before going to bed, the LuCiD scale items were read out by an experimenter and ample time was allowed to ask questions and to clear up any misunderstandings. REM sleep awakenings were started at 3 a.m. Awakenings were made following approximately 5 min of REM sleep which was scored online. After participants narrated their dream, an experimenter read out the questionnaire items and marked the answers on the scale. All interactions and dream narratives were audiotaped. Following each awakening, participants were allowed to go back to sleep until the next REM period commenced.

From these datasets, they built their LuCiD scale.

Figure 5 gives mean ratings of subscales for lucid vs non-lucid dreams & seems to be based on both survey 1 & 2; this masks large systematic differences between survey 1 & survey 2. If you look at Fig. 4, pg. 17 ( https://i.imgur.com/FBRRBAW.png ), you see that the paper-pencil ratings for a lucid dream are often *much* higher than the sleep-lab ratings after a lucid dream. E.g. 'control' (reused in Voss 2014) gets a paper-pencil rating of ~3 for lucid dreams & ~0.5 for non-lucid dreams (eyeballing all these figures since I don't want to spend a lot of time tracking down tables & supplementary information), while the sleep-lab participants gave a mean of ~0.8 to their lucid dreams & ~0.1 to their non-lucid dreams! Hence when averaged together, you see something like the 2 vs 0.3 for 'control' in Figure 5. Something about doing dream reports much later seems to lead to gross inflation of scores as compared to doing them in a sleep laboratory.

Now, why does this matter? Because in Voss 2014, all the subjects are in a sleep laboratory ("Subjects spent up to four nights at the sleep laboratory").

So, when we want to compare the mean scores of lucid dreams in Voss 2014 to the mean scores of lucid dreams in Voss 2013, we can't compare to Figure 5. We need to compare to Figure 4's laboratory subset. Voss 2014 reports Insight, Dissociation, & Control. For the 25/40 Hz condition, it looks like the comparison goes: 0.6 vs 3.1, 1.6 vs 1.7, 0.5 vs 0.7 (respectively), with substantial standard error around both datasets' means. This is hardly as damning as portrayed...

If we had compared apples with oranges & used the paper-pencil scores, then it would look pretty bad, yeah (3.5, 1, 3), but why would we do that? People filling out paper-pencil ratings are not the same as people being woken up in a sleep lab & asked how lucid the dream they were in the middle of was.

