Comments on The Neurocritic: "Does Gamma tACS Really Induce Lucid Dreaming?" (5 comments)

---

gwern (2014-09-01):

You're comparing apples and oranges - LuCiD scores from different populations - based on a misunderstanding of Figure 5 in Voss et al. 2013 and Figure 3 in Voss et al. 2014.

Voss 2013 did two surveys. From p. 11:

> 2.4.1. Paper and pencil (Surveys Nos. 1 and 2)
>
> The LuCiD scale was advertised among students at Bonn University (Germany) and participants of the lucid dream group that meets weekly at Bonn University to train newcomers in lucid dreaming and to discuss dreams in general. To assure that dream reports referred to RECENT dreams, participants were asked to specify the time lag between the dream and the dream report. Only reports of recent dreams (less than 6 h since report) were included in the analysis.
>
> ...we also collected 117 dream reports after awakenings from REM sleep in the sleep laboratory at Bonn University. Sleep was monitored through standard polysomnography (somnomed, Germany). Participants reported to the sleep lab at 9:00 p.m. and were instrumented for PSG. Before going to bed, the LuCiD scale items were read out by an experimenter and ample time was allowed to ask questions and to clear up any misunderstandings. REM sleep awakenings were started at 3 a.m. Awakenings were made following approximately 5 min of REM sleep which was scored online. After participants narrated their dream, an experimenter read out the questionnaire items and marked the answers on the scale. All interactions and dream narratives were audiotaped.
> Following each awakening, participants were allowed to go back to sleep until the next REM period commenced.

From these datasets, they built their LuCiD scale.

Figure 5 is mean ratings of subscales over lucid vs. non-lucid dreams and seems to be based on both surveys 1 and 2; this masks large systematic differences between survey 1 and survey 2. If you look at Fig. 4, p. 17 ( https://i.imgur.com/FBRRBAW.png ), you see that the paper-pencil ratings for a lucid dream are often *much* higher than the sleep-lab ratings after a lucid dream. E.g., 'control' (reused in Voss 2014) has a paper-pencil rating of ~3 for lucid dreams and ~0.5 for non-lucid dreams (eyeballing all these figures, since I don't want to spend a lot of time tracking down tables and supplementary information), while the sleep-lab participants gave a mean of ~0.8 to their lucid dreams and ~0.1 to their non-lucid dreams! Hence, when averaged together, you see something like the 2 vs. 0.3 for 'control' in Figure 5. Something about doing dream reports much later seems to lead to gross inflation of scores compared to collecting them in a sleep laboratory.

Now, why does this matter? Because in Voss 2014, all the subjects are in a sleep laboratory ("Subjects spent up to four nights at the sleep laboratory").

So, when we want to compare the mean scores of lucid dreams in Voss 2014 to the mean scores of lucid dreams in Voss 2013, we can't compare to Figure 5. We need to compare to Figure 4's laboratory subset. Voss 2014 reports Insight, Dissociation, & Control. For the 25/40 Hz condition, it looks like the comparison goes: 0.6 vs 3.1, 1.6 vs 1.7, 0.5 vs 0.7 (respectively), with substantial standard error around both datasets' means.
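(A quick back-of-the-envelope check in Python of the pooling point above. Every number is a rough eyeballed reading off the figures, not a published statistic; the equal-weight pooling, and the assumption that the first number in each Voss-2014 pair is the 2014 value, are simplifications on my part.)

```python
# Eyeballed mean 'control' subscale ratings from Voss 2013, Fig. 4
# (approximate readings off the figure, not published tables).
ratings = {
    "paper_pencil": {"lucid": 3.0, "non_lucid": 0.5},
    "sleep_lab":    {"lucid": 0.8, "non_lucid": 0.1},
}

# Pooling both populations with equal weight roughly reproduces the
# ~2 vs ~0.3 pattern in Fig. 5 (the real figure pools unequal Ns,
# so this is only illustrative).
pooled_lucid = (ratings["paper_pencil"]["lucid"] + ratings["sleep_lab"]["lucid"]) / 2
pooled_non_lucid = (ratings["paper_pencil"]["non_lucid"] + ratings["sleep_lab"]["non_lucid"]) / 2
print(pooled_lucid, pooled_non_lucid)  # ~1.9 vs ~0.3

# Like-for-like: Voss 2014 (all sleep lab) should be set against the
# 2013 sleep-lab subset, not the pooled or paper-pencil numbers.
voss_2014_control_lucid = 0.5  # eyeballed, 25/40 Hz condition
gap_vs_lab = abs(voss_2014_control_lucid - ratings["sleep_lab"]["lucid"])
gap_vs_paper = abs(voss_2014_control_lucid - ratings["paper_pencil"]["lucid"])
print(gap_vs_lab, gap_vs_paper)  # small gap vs. lab subset, large gap vs. paper-pencil
```

The same arithmetic that makes the pooled Figure 5 numbers look damning makes the lab-only comparison look unremarkable.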
This is hardly as damning as portrayed...

If we had compared apples with oranges and used the paper-pencil scores, then it would look pretty bad, yeah (3.5, 1, 3), but why would we do that? People filling out paper-and-pencil ratings are not the same as people being woken up in a sleep lab and asked how lucid the dream they were just in the middle of was.

---

Michael Clayton (2014-05-26):

It is potentially important to note the very low current used in this study (250 µA peak to peak). I imagine this was done to reduce the chance of waking subjects. However, it is very abnormal for tACS research, in which 1000 µA peak to peak is commonly used (e.g., Kar & Krekelberg, 2014; JofN). It would be interesting to see whether the effects are more substantial with higher currents.

---

Anonymous (2014-05-26):

On the more general issue of transcranial electrical stimulation (TES), I was reading this piece in WIRED: http://www.wired.com/2014/05/diy-brain-stimulation/

If you go down towards the middle, you read this: "It's a rare thing for a scientist to stand up in front of a roomful of his peers and rip apart a study from his own lab. But that's exactly what Vincent Walsh did in September at a symposium on brain stimulation at the UC Davis Center for Mind and Brain. Walsh is a cognitive neuroscientist at University College London, and his lab has done some of the studies that first made a splash in the media.
One, published in Current Biology in 2010, found that brain stimulation enhanced people's ability to learn a new number system based on made-up symbols.

Only it didn't really.

'It doesn't show what we said it shows; it doesn't show what people think it shows,' Walsh said before launching into a dissection of his paper's flaws. They ranged from the technical (guesswork about whether parts of the brain are being excited or inhibited) to the practical (a modest effect with questionable impact on any actual learning outside the lab). When he finished this devastating critique, he tore into two more studies from other high-profile labs. And the problems aren't limited to these few papers, Walsh said; they're endemic in this whole subfield of neuroscience."

That has always been my impression of TES: weak, hyped findings. Walsh admits that his Current Biology study does not show what they said it shows! Yet they got their publication in Current Biology. Are they retracting it?

---

The Neurocritic (2014-05-20):

It's true, Christopher Nolan has me on permanent retainer...

---

Anonymous (2014-05-19):

I read that over the weekend, and I was wondering about those sub-1 ratings of insight as well. They do look really low. Lucid dreaming is fascinating, though! Imagine if you could do it reliably; then you would never go to a movie theater ever again! NC, did you get money from the entertainment industry to trash this study? :)