Comments on: Neuroscience Cannae Do It Cap’n, It Doesn’t Have the Power
http://phenomena.nationalgeographic.com/2013/04/10/neuroscience-cannae-do-it-capn-it-doesnt-have-the-power/
A science salon hosted by National Geographic Magazine
Tue, 31 Mar 2015 13:09:25 +0000

By: Robert Cox
Tue, 30 Apr 2013 22:54:25 +0000
On the other hand, Javier Gonzalez-Castillo’s results show that increasing the power of an fMRI study leads to the entire gray matter of the brain being “active”, i.e. significantly correlated with the task timing. This is hard to interpret — which is what he’s working on now. And it is hard and confusing.
By: Brain Molecule Marketing
Fri, 12 Apr 2013 14:02:12 +0000
“Information is expensive” and “All science is wrong, some just less so.”

By: mcgeorge
Thu, 11 Apr 2013 18:48:39 +0000
Nice. But the title should read that neuroscience cannae do it, cap’n, because it disnae have (or hae) the power.
By: Lie
Thu, 11 Apr 2013 16:29:38 +0000
Regarding your last point about grant committees deciding the sample size is too large: the same goes for clinical ethics committees. Since there are currently no accurate guidelines that say “for power X, test Y participants”, ethics committees have nothing to go on. So they insist on the smallest possible number of participants, which could explain why a lot of fMRI studies in clinical populations have n < 10 (in combination with time, money and the difficulty of recruiting participants, of course).
By: Ralph Dratman
Wed, 10 Apr 2013 17:02:08 +0000
If chronically underpowered studies typically cannot be replicated, then using more animals in an attempt to replicate small exploratory studies will typically sacrifice a lot of animals to no great purpose. As an alternative, I suggest designing experiments and using evolutionary meta-strategies that produce sharper results. Consider the history of discoveries which have verifiably led to genuine advances. For example, cancer treatments certainly are keeping patients alive longer than was possible 30 years ago. In that field, unfortunately, many subjects have had to die; nevertheless, somehow we now have better treatment protocols. Ask how that happened.
By: Dave Nussbaum
Wed, 10 Apr 2013 15:24:03 +0000
Thanks for another clear and compelling piece to bring these problems to the forefront, Ed.

One point you raise that should be particularly compelling to researchers is that running studies with low power means you may be testing good ideas that are true, but discarding them as false.

The flip side, which you turn to next, is the increased chance of false positives. This makes sense to someone who follows the argument, but I’m not sure it’s intuitively compelling yet in the story as written. I would argue that, on an intuitive level, if you go looking for something with a small chance of finding it, then it seems reasonable that if you find it, it’s real — and it’s particularly impressive that you found it, since you had such a small chance of doing so.

What the intuition fails to grasp is that when you “find” something statistically, it’s not the same thing as finding a sunken treasure that you can hold in your hand. So what remains to be explained — and I recognize this is difficult — is why it is that low power makes a finding less likely to be true (or, if true, then probably inflated).

The authors cover this in the paper, but (understandably) it’s conveyed in a way that assumes an acquaintance with statistical distributions. I could imagine using an example to illustrate the issue, but I can see that it’s hard to fit into an article like this.

P.S. I just saw that @soozaphone’s Guardian piece does a pretty nice job with this.
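One way to make that intuition concrete is the winner’s-curse effect the argument implies: in an underpowered experiment, only samples whose random noise happens to exaggerate the true effect cross the significance threshold, so the significant results systematically overestimate it. A sketch in plain Python (the true effect, group size and simulation count are illustrative assumptions, not values from the paper):

```python
import math
import random
from statistics import NormalDist, mean

def mean_significant_effect(true_d=0.3, n_per_group=10, alpha=0.05,
                            sims=5000, seed=1):
    """Run many underpowered two-group experiments and return the average
    observed effect size among those that reached significance."""
    rng = random.Random(seed)
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)  # ≈ 1.96 two-sided
    significant = []
    for _ in range(sims):
        control = [rng.gauss(0.0, 1.0) for _ in range(n_per_group)]
        treated = [rng.gauss(true_d, 1.0) for _ in range(n_per_group)]
        observed_d = mean(treated) - mean(control)  # unit variance: diff estimates d
        z = observed_d / math.sqrt(2.0 / n_per_group)
        if abs(z) > z_crit:
            significant.append(abs(observed_d))
    return mean(significant)

# True effect is d = 0.3, but the significant subset reports around 1.0.
print(mean_significant_effect())
```

With these made-up numbers only about a tenth of the simulated experiments reach significance, and those that do report an effect roughly three times the true one, because with n = 10 per group no observed effect smaller than about 0.88 can clear the significance threshold at all.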

By: Josh Veazey
Wed, 10 Apr 2013 14:43:05 +0000
The disincentive in academic publishing to include caveats or to clearly label pilot studies is a major fault here. Preliminary studies and negative results do not play well in major journals, even though they could in principle play important roles in understanding the big scientific picture.

If preliminary or negative results do not get rewarded with tenure, then what is the incentive for framing these studies accurately?

The popular media are at fault here as well: they place even more importance on the big splashes, often with almost no consideration of the proper context.

By: gregorylent
Wed, 10 Apr 2013 14:23:55 +0000
Now if they would just flip their basic model, maybe we could get somewhere.
By: miko
Wed, 10 Apr 2013 14:17:50 +0000
I should be clear: I am not saying underpowered experimental designs are not prevalent… they may well be. These authors have an interest in clinical studies in humans, but they are conflating a basic research field (neuroscience) with a couple of methods related to its medical application in humans (neurology and related specialities).

The equivalent might be identifying shortcomings in clinical cancer trials, and then declaring there are widespread statistical problems in the field of cell biology.

By: John Kubie
Wed, 10 Apr 2013 13:57:39 +0000
I don’t completely agree. You say “waste of taxpayer money …”. The argument can go the other way: performing an experiment with more power costs much more (more subjects, more replications, etc.). The question is cost-benefit: what is the cost of underpowered studies versus the cost of overpowered studies? My sense is that underpowered studies should be published, with strong caveats. When conclusions are important, they should be followed by appropriately powered studies. Underpowered studies should be considered publishable, but as pilot studies.
[John, I don’t think anyone’s saying underpowered studies should never be published. As I wrote, that would spell the end for exploratory research and harm science. You and I are saying the same thing: clearly label these as exploratory pilot studies, and then carry out replications that are billed as such and adequately powered. – E]