Posted
by
kdawson
on Sunday September 20, 2009 @03:10PM
from the as-we-may-think dept.

AthanasiusKircher sends in a Wired writeup on what should surely be a contender in the next Improbable Research competition: wiring a dead salmon into an fMRI machine and showing it pictures of humans designed to evoke various emotions. "When they got around to analyzing the voxel... data, the voxels representing the area where the salmon's tiny brain sat showed evidence of activity. In the fMRI scan, it looked like the dead salmon was actually thinking about the pictures it had been shown. ... The result is completely nuts — but that's actually exactly the point. [Neuroscientist Craig] Bennett... and his adviser, George Wolford, wrote up the work as a warning about the dangers of false positives in fMRI data. They wanted to call attention to ways the field could improve its statistical methods. ... Bennett notes: 'We could set our threshold [of significance] so high that we have no false positives, but we have no legitimate results.... We could also set it so low that we end up getting voxels in the fish's brain. It's the fine line that we walk.'" The research has been turned down by several publications, according to Wired, but a poster is available (PDF).

They're definitely on track for an Ig Nobel prize. Using a red herring instead of the salmon would have made it a near certainty. A kipper would normally be the best choice, apart from the lack of a head/brain.

I am astonished that dead fish, or even live fish for that matter, would be interested in any way in pictures of people. Pictures of fish, maybe. But I think salmon, rather than thinking about what fish were thinking, would be thinking, "Can I eat that?" (answer: "No, you're dead.")

This story makes me reconsider my zeal to see Terri Schiavo die. If she was indeed experiencing brain activity despite her handicap, surely she would be considered more alive than a dead salmon.

Our consciousness is all just a series of nerve impulses and chemical reactions. If Terri was experiencing these reactions and impulses, I hate to say it, but we may have killed a human being and not just a vegetable.

This is not about brain activity post-mortem! This is about the stupidity of some fMRI data. This is about the voodoo correlations that come out of fMRI data [slashdot.org] in popular research that is peer reviewed. They did this to prove a point, not to claim dead fish think. And even if dead fish did think, I could use your logic to claim that every time we bury a dead person we are burying them with cognitive abilities -- obviously not true! I thought the summary covered that very well, with the paper being titled "Neural correlates of interspecies perspective taking in the post-mortem Atlantic Salmon: An argument for multiple comparisons correction."

Even vegetables put into an MRI machine for a functional scan can show some 'brain activity', simply because the fMRI doesn't actually show 'brain activity', it (in its typical configuration) shows blood oxygenation concentration levels in various places in the brain. The real problem is translating increasing or decreasing levels of oxygenation into brain activity. That's precisely what this study is showing: even a dead fish has changing brain-blood oxygenation levels. You need to remember to do the science and the math part of the problem, and make sure that the statistics are really showing meaningful relations.

The question remains as to what functionality is required to call a person "alive" or "brain dead". If you want to be as absolutely conservative as possible, anyone with a beating heart and a working brain stem (corneal reflexes, heartbeat signal, breathing stimuli, etc.) can be considered alive, even if their entire frontal lobe has been caved in, removing any wisp of humanity, and they aren't even capable of controlling their bowels or bladder or many other autonomic or homeostatic functions. Whether you think it's cruel to pull the plug on someone in this state is entirely up to personal beliefs and/or religious convictions. Medicine tries not to tread too deeply into this water, simply because it's not worth it to rehash the ethical dilemmas with no new science to change anyone's opinion. We leave it up to the individuals (through advance directives, living wills, etc.) and their families to choose.

Just don't be fooled into thinking that scattered activity in a bundle of nerves we happen to call a brain necessarily means she's "alive".

"That's precisely what this study is showing: even a dead fish has changing brain-blood oxygenation levels."

No, they're showing that the noise inherent in the scan can be taken for signal if you aren't careful with your stats. The dead fish is NOT exhibiting varying blood oxygenation levels.

Even the worst fMRI experiments that get published use a repetitive design, or equivalent. The simplest setup is to administer a stimulus or have the subject do something then stop, then do it again then stop, then do it again, etc. When you're done, you look for signals that vary in tandem with the stimulus.

A dead fish's brain does NOT have blood oxygenation levels that vary in that way. For the purposes of the experiment they're basically constant. However, if you look at enough different measurements, the noise superimposed on that static signal will correlate with the stimulus.

The fish is just for laughs. They could have easily done the same thing with a jar of agar.
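The "noise correlating with the stimulus" point is easy to simulate; here is a toy numpy sketch (my own illustration, not the authors' actual analysis): apply an uncorrected voxelwise test to thousands of pure-noise time series against an on/off block design, and a few hundred of them will pass at p < 0.05 by chance alone.

```python
import numpy as np

rng = np.random.default_rng(0)
n_scans, n_voxels = 120, 8000

# On/off block design: 10 scans of "stimulus", 10 of rest, repeated 6 times.
design = np.tile(np.r_[np.ones(10), np.zeros(10)], n_scans // 20)

# Pure noise: there is no signal in any "voxel".
noise = rng.standard_normal((n_voxels, n_scans))

# Uncorrected voxelwise test: difference of on/off means over its standard error.
on, off = noise[:, design == 1], noise[:, design == 0]
se = np.sqrt(on.var(axis=1, ddof=1) / on.shape[1] +
             off.var(axis=1, ddof=1) / off.shape[1])
z = (on.mean(axis=1) - off.mean(axis=1)) / se

false_positives = int((np.abs(z) > 1.96).sum())  # two-sided test at ~p < 0.05
print(false_positives)  # roughly 5% of 8000, i.e. about 400 "active" voxels
```

Swap the noise array for a jar of agar (or a dead fish) and you get the same result, which is the poster's point.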

"A dead fish's brain does NOT have blood oxygenation levels that vary in that way." have not RTFM, but surely a decomposing body Will have varying levels of various chemicals in it? So the question is not 'are we getting data from this specimen?', but 'what is causing the specimen's readings to change?'

But this study showed that dead salmon can show just as much brain activity as Terri Schiavo... This study just shows that a "dead" organism with a brain that hasn't yet decomposed can still support some processes.

Bzzzt. Wrong. The entire point of the write-up was to warn about the danger of false positives. Your attributing of brain activity to random, natural noise is exactly the danger they want to avoid.

Given that psychotherapists are often accused of having at least as many problems as their patients, and that actually resolving issues cuts off an income stream, there's an argument that undead salmon might be more honest than a lot of practicing therapists.

[..] it looked like the dead salmon was actually thinking about the pictures it had been shown.... The result is completely nuts -- [...] as a warning about the dangers of false positives [...]

Looks to me like the dark matter syndrome: "Our theories wrong? Our calculations off by an insane amount? Unpossible! That can never be. Nature must be lying!"

Has anyone even checked whether a dead brain can still have flows of energy through it? I mean, light patterns still reach the retinas and can still trigger signals, depending on the state of the neurons there. How long was that salmon dead? I know that pigs can be frozen to be clinically dead for long periods (90+ minutes) and still be revived without much damage.

I'd at least check whether there are actual signals of current going through the brain (with ANOTHER (better) instrument) before dismissing it. Every unchecked assumption is a good chance for a flaw in your study. You wouldn't want it to be dismissed in peer review because of a faulty assumption.

Looks to me like the dark matter syndrome: "Our theories wrong? Our calculations off by an insane amount? Unpossible! That can never be. Nature must be lying!"

I find it amazing that people who haven't even bothered to study the data or the reasons for hypotheses like dark matter feel the need to make ass-backwards comments about people who've literally dedicated their lives to it.

What do you actually know about dark matter and the current state of the evidence? Do you even understand it at a layman's level let alone understand the insanely complex math? Have you heard of the bullet cluster? Do you know about the rotation curve of galaxies? Do you understand anything about the cosmic microwave background and its fluctuations? Do you understand the background theories you're ridiculing? Do you know why General Relativity fits the data we have collected so well? Have you even bothered to find out why scientists believe in these things?

NARRATOR: Tune in next week when the physicist says, "Oh shit! I forgot to divide by two! That changes EVERYTHING!"

The gravitational models of GR and QM are incompatible. One of them MUST be wrong. Any conclusion drawn from an incorrect theory must also be wrong, although it is certainly possible to draw incorrect conclusions from correct theory. (The latter was responsible for the discrepancy between observed and predicted neutrino counts from the sun.)

Even if we knew which model was the correct one, as was noted by another poster, all the models we have are incomplete. (Well, since models are simplifications of reality...)

"Do you know why General Relativity fits the data we have collected so well? Have you even bothered to find out why scientists believe in these things?"

One might well wonder, because it's certainly not because GR is philosophically compatible with the rest of 20th century science.

As a matter of hobbyist curiosity, I'm reading up on the life of Einstein and his arguments with the QM people at the moment, and the curious thing that jumps out at me is how much Einstein believed that GR was only a provisional theory...

You should read through The Feynman Lectures, Vol. 1, where Feynman reminds us that there is no such thing as a 'true' theory in science. All science has ever done is to describe natural phenomena as well as possible. This isn't a flaw of science, and it's not even a shortcoming. The best theories we have are at best approximations, and any new theory we'll ever cook up will only be a better, more sophisticated approximation. GR and QFT are as 'true' as we need them to be in the overwhelming majority of cases.

A short, probably wrong explanation: two clusters of galaxies collided with each other. By analyzing the emissions of the resultant impact, we can see where all the baryonic (normal) matter went - baryonic matter smacking into other baryonic matter produces energetic particles like x-rays, which we can see. However, by examining the gravitational lensing caused by these two clusters, we can determine where most of the mass went - and it's really far off from the center of the baryonic matter. Indeed, it looks like most of the mass of each cluster did not interact electromagnetically with the mass of the other.

The theory of dark matter explains this really well. Baryonic matter interacts electromagnetically with other baryonic matter, and so when the bullet cluster hit, its baryons slowed down (like a bullet flying through water). However, dark matter does not interact electromagnetically with baryonic matter, or very much at all with other dark matter, so the dark matter components of each cluster just kinda ignored the impact and kept on going.

Dark matter and dark energy aren't just theories that a bunch of arrogant pricks pulled out of their asses.

Ummm, actually...

We have absolutely no direct evidence of either.

We have numerous alternative theories that explain various anomalies, such as galactic rotation curves and the implied anisotropy of the CMB, without resorting to saying the universe consists of 96% invisible voodoo.

Keep in mind that until last week, we had no direct evidence of something as basic to modern physics as the Bohr model; before that, we had "hooked" atoms dating back to (at least) Epicurus. Theories come and go, and without reproducible, experimental evidence, we have at best a model that fits the data - NOT, as far too many people seem to believe, a necessarily accurate description of objective reality.

I find it amazing that people who haven't even bothered to study the data or the reason for hypotheses like dark matter feel the need to make ass backwards comments about people who've literally dedicated their lives to it.

The GP said no such thing. He merely hypothesized, and not without some basis in fact, that a dead fish may well still have neural activity. Keep in mind, for several hours after death-of-the-whole (depending on the cause, of course), the vast majority of cells in the body still work just fine.

Now, if he had said something like "how do we know gravel doesn't have neural impulses", I would agree with your position; but we so poorly understand "death" that your ridicule reflects worse on you than on your target.

Define "direct evidence". There's no "direct evidence" that the wind exists, but you accept it does because you see it's effects. The Bullet Cluster results provide equivalent evidence for the existence of Dark Matter. I mean, you *did* look up the BC results, didn't you?

Similarly, the expansion rate of the universe is accelerating. Period. Of this there is absolutely no doubt, as we actually do have direct evidence demonstrating it. What's causing it?

Actually, there is direct evidence of some dark matter. Quite a lot of gas and dust has been found that wasn't observable before the "dark matter" hypothesis came out. Some estimates I've seen put it as high as 20% of the "missing" mass. Dark matter is matter we haven't observed yet, and it MAY (probably) have properties that make it difficult to observe.

Keep in mind that until last week, we had no direct evidence of something so basic to modern physics as the Bohr model

Sorry, but this statement alone indicates that you don't know what you are talking about.

First off, the Bohr model is wrong; we already knew that. But if you really mean the model of electron orbitals, that means Quantum Electrodynamics, which has been measured and tested and is correct to umpteen decimal places -- you would have a hard time finding another theory that has been tested more than QED.

If you insist that only pretty pictures could count as "direct evidence", then you know nothing about actual science.

We have numerous alternative theories that explain various anomalies, such as galactic rotation curves and the implied anisotropy of the CMB, without resorting to saying the universe consists of 96% invisible voodoo.

No. We don't. That's the whole point. Dark matter wasn't invented for the hell of it. Astronomers resisted it for decades. It was ultimately accepted precisely because it continued to pass observational tests and other theories didn't.

It's possible to cook up alternative theories to explain individual phenomena such as galactic rotation curves (e.g., MOND). But they all fail when you try to simultaneously explain multiple phenomena such as galactic rotation curves and CMBR anisotropies and early universe structure formation and galaxy cluster dynamics and... you get the idea.

The brain can still function when 'brain dead.' Think about it. Your entire brain doesn't die once you don't get enough oxygen for a few minutes; you just can't maintain the feedback loop called consciousness. That alone doesn't mean the cells aren't still functioning. Since you're unconscious, though, you may as well be dead if you can never recover from it.

Consciousness is a rather circular loop in the brain. Minor damage to part of that loop can ensure that you never wake up, unless a path around that damage is formed, which may or may not happen. We've seen people wake up from comas after years, because their brain has formed pathways around the damage.

Then we get into the whole debate of 'what is death?' True brain death would mean that the entire brain is dead and can never recover. Little pockets of cells can survive for a period of time, but they will always die in the end if they aren't getting the oxygen/energy/minerals they need. So, unconscious is dead? No, it's just unconscious. We can distinguish between coma, sleep, death, etc. Terri Schiavo should have been considered dead, since over 90% of her brain was dead, but because she showed some basic brainstem functions, people said she was alive. In reality, she was less alive and less able to be revived than someone who hasn't had a pulse in ten minutes!

fMRI is a blunt instrument compared to what ultra high resolution spectroscopic MRI will show us in the future.

Current MRI is tuned to the proton NMR signal (and variations of it). As magnet technology advances and ginormous gradients are achieved, it will be possible to obtain full spectroscopic data (chemical shift) in addition to positional data -- not only for the proton but for other isotopes that produce an NMR signal (of which all the CHONPS elements have at least one). As acquisition electronics speed...

Firstly, numerous universities bundle neuroscience and related fields of engineering into their psychology departments, so it seems pretty apparent that this wasn't a bunch of cognitive-psych "Let's build a graph/model!" junk. Also, it's pretty common for psychologists to hold degrees in a "hard" science as well, so your bias is probably rooted in ignorance.
Secondly, it seems to me that their point wasn't that the fMRI wasn't sensitive enough, or particular enough. Instead the problem seems to be a...

The top MRI used on humans is, I believe, 9.2 T - just good enough to see individual neurons and make out the synapses firing. The top MRI used on animals is closer to 12 T. Provided such high magnetic fields are shown to be safe, you could gather a lot of useful information on the functioning of the brain.

But I see no way of gathering continuous data at high resolution. From radio astronomy through to quantum mechanics, there's always a trade-off between resolution in space and resolution in time. The better...

Can we conclude from this data that the salmon is engaging in the perspective-taking task? Certainly not. What we can determine is that random noise in the EPI timeseries may yield spurious results if multiple comparisons are not controlled for. Adaptive methods for controlling the FDR and FWER are excellent options and are widely available in all major fMRI analysis packages. We argue that relying on standard statistical thresholds (p < 0.001) and low minimum cluster sizes (k > 8) is an ineffective control for multiple comparisons. We further argue that the vast majority of fMRI studies should be utilizing multiple comparisons correction as standard practice in the computation of their statistics.
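The adaptive FDR control the poster recommends is commonly implemented as the Benjamini-Hochberg step-up procedure; here is a minimal numpy sketch (my own illustration, not code from the poster or any fMRI package):

```python
import numpy as np

def benjamini_hochberg(p_values, q=0.05):
    """Boolean mask of discoveries, controlling the false discovery rate at q."""
    p = np.asarray(p_values, dtype=float)
    m = p.size
    order = np.argsort(p)
    # Step-up rule: find the largest k with p_(k) <= (k/m) * q,
    # then reject the k smallest p-values.
    below = p[order] <= (np.arange(1, m + 1) / m) * q
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()
        reject[order[:k + 1]] = True
    return reject

# The three small p-values survive correction; the large one does not.
print(benjamini_hochberg([0.01, 0.02, 0.03, 0.9]))
```

FWER control (e.g. Bonferroni) is stricter still: it bounds the probability of even one false positive anywhere in the brain, at the cost of statistical power.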

And why wasn't this published? The very conclusion is that we should be more careful when trusting fMRI results and conduct more testing before jumping to conclusions.

And why wasn't this published? The very conclusion is that we should be more careful when trusting fMRI results and conduct more testing before jumping to conclusions.

Perhaps because what he's saying isn't new? As far as I can tell he's merely restating a substantive point that was recently made by someone else [wiley.com], which attracted substantial publicity [newsweek.com] as well as sober rebuttals [wiley.com] (along the lines of: nobody actually uses the flawed statistical methods that you're critiquing). All this guy is doing is illustrating the point in an absurd and attention-grabbing way.

Fair enough, I wasn't aware of that. In that case, why the hell did I read this nonsense post?

Noundi - It most likely will be published soon. The paper is just working its way through peer review right now. The last set of journal reviewers were quite kind and had very good feedback for us to improve the Salmon story.

Yali - It is a different statistical issue than the Vul et al. non-independence error. While a great many papers have been written on how to complete multiple comparisons correction in fMRI there is still a problem in that not everyone is doing it. This leaves the door open to false positives, the number of which remain unknown.

So basically, while the problem has been brought to attention before, it has been ignored by some, leaving some studies uncertain but still considered valid. This stunt was to prove that if you trust the results of such studies, you might as well trust in brain activity in dead fish. I can see why some would ignore it, or even get offended by it. Still, it's a valid point.

I was there, I saw the poster, it was a humorous joke meant to remind fMRI newbies to control their type I error. It was in no way publishable research and was not intended to be. Most people who do fMRI research already make the effort to do the stats correctly. Multiple comparisons correction for fMRI is old news - the authors' most recent fMRI stats citation was from 15 years ago. And no there wasn't any activity or signal change or anything else in the brain of a dead salmon.
It has nothing to do with...

powrogers - Thanks for stopping by our poster last June. I like your comment quite a bit but would add one point. While the correction of multiple comparisons in fMRI has been well understood for quite some time (you mention 15 years), the current problem is that not everyone does it while conducting their research. Having a high p-value threshold and a minimum cluster threshold is an unknown, soft control of the problem. Our argument is that true correction methods that control for the FDR or FWER should be employed.

Well, maybe what they saw wasn't a false positive? Maybe there is residual functionality of the brain for some time after death, the same way you can electrically stimulate the muscles of a dead body to make them twitch. Is it that unthinkable that visual impulses have some effect on the brain? Does death instantly render every single brain cell inoperable?

Well, it can't measure actual flow unless you stick a mechanical device inside the capillaries. It presumably measures something very indirect (such as change in energy state) which, in a living brain, can be linked more to blood flow than other sources.

However, there will certainly be other sources that generate identical signals. There will also be experimental errors (such as noise in the system) and observer errors (eg: mis-ascribing cause to effect).

Very seriously, this is cool research! The really sad thing here is that they have trouble publishing. This shows that the interest in medical research is less on truth and knowledge and more on stunts and commercializable results.

Well we already knew that. There's plenty of research showing journals won't publish negative studies concerning products made by key advertisers and that there's enormous pressure on researchers from the corporate sector to only ever generate favourable studies.

Now we have a paper that disses not one product or one company but EVERY product, EVERY company and EVERY study that falsely assumes a specific cause for a much more general effect. What do you think the journals are going to do? This gets published?

gweihir - We are on our second round of reviews at a major neuroscience journal and things are looking good for getting it published. A lot of our trouble has come from individuals who don't want multiple comparisons correction to become a mandatory practice in functional imaging.

The poster highlights a very well-known problem in statistics that folks doing brain research are well aware of and almost always correct for. The issue is that, when you're doing a large number of statistical tests, like you are with brain imaging data, you're likely to get a lot of false positives. You can correct for this by using a very conservative significance threshold (i.e., "p-value"), directly controlling for the proportion of false positives using a statistic called the "false discovery rate," controlling for false positives via monte carlo simulation, etc. etc.

Most neuroscientists who do brain imaging are very familiar with these correction methods, and apply them with great success. If anything, neuroscientists tend to be too concerned with false positives, such that they end up actually missing real activations because they're over-correcting.

So it's actually really unfortunate that this study is getting so much popular media attention, because it's giving people the impression that researchers aren't aware of this problem and/or that that they aren't doing anything about it. That couldn't be further from the truth.
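The over-correction worry above can be made concrete with a toy example (hypothetical numbers, not from any real study): Bonferroni's per-test threshold of alpha/m keeps false positives near zero, but it can also erase a genuinely active voxel that a less conservative procedure would have kept.

```python
import numpy as np

rng = np.random.default_rng(2)
m = 10000
# One genuinely active voxel (p = 1e-5) among 9999 null tests.
p = np.append(rng.uniform(size=m - 1), 1e-5)

alpha = 0.05
uncorrected_hits = int((p < alpha).sum())     # hundreds of chance "activations"
bonferroni_hits = int((p < alpha / m).sum())  # threshold 5e-6: the real voxel is lost
print(uncorrected_hits, bonferroni_hits)
```

Neither extreme is satisfying, which is exactly Bennett's "fine line that we walk": no correction floods you with false positives, while the harshest correction throws away the real effect too.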

joepa - You have a lot of very good points. Most neuroscientists are aware of the multiple comparisons problem and, at minimum, try to control for it using stricter statistical thresholds (lower p-value cutoffs) and minimum clustering values (you have to have several contiguous voxels). The trouble with this approach is that it is a soft control of the multiple comparisons problem. You still have no idea what the false positive rate will be across the whole brain, only on a quasi voxel-to-voxel basis. Using techniques...

Thanks for dropping in Craig, but Slashdot tends to move at such a pace that an article, unless it gets hundreds of replies, dies off quite soon. I've had the same experience trying to respond to articles that directly relate to things that I'm an expert on. Most of the time you are too late to add anything to the discussion (well, you can add to it, but nobody will read it).

owlstead - I hear you - I have been a fellow /. reader for years and have observed firsthand the waxing and waning of articles. The above post was mostly a courtesy if anyone was genuinely curious about some aspect of the poster. That and I felt somewhat compelled to post a comment - as a longtime reader it is quite an honor to see some aspect of your own work on the Slashdot main page, even if it was for a dead fish.

The need for multiple comparison corrections is standard knowledge among cognitive neuroscientists. It is actually common practice for manufacturers of MRI machines to image inanimate objects as a test of the machine. You could easily get that data, rather than imaging a dead fish. Once you know the amount of noise, it would be easier to just simulate within a statistical program to determine the effects of not correcting.
If the authors of the poster weren't aware of the value of multiple comparison corrections BEFORE they stuck a fish in the magnet, at least they learned a lesson everyone else gets in second-year stats.

ardeaem - At face value you are absolutely right. The majority of cognitive neuroscientists do use multiple comparisons correction in their research. Our commentary is targeted at the remainder of researchers who continue to use uncorrected statistics. The percentage is larger than you might believe, and my co-authors and I are of the opinion that we need to get our statistical house in order for the field to mature.

Atlantic salmon is called Salmo salar in biology-speak. It is the model species of the entire order Salmoniformes. Salmon doesn't get any truer than that. Pacific species belong to the genus Oncorhynchus. They are true salmons too. "Trouts" belong to both Oncorhynchus and Salmo (and another 5 genera). Some of these trouts have anadromous forms (that is, go to the seas and return to the rivers to spawn), for instance, the rainbow trout (called steelhead in its anadromous form) is Oncorhynchus mykiss and the brown trout (sea trout) is Salmo trutta.

A common mistake made in discussions of taxonomy is overlooking the issue of whether closely related species taste the same. In this case, you omitted the fact that all of them are great when grilled. With a slice of lemon on the side.

No, west-coast farmed Atlantic salmon is a cheap substitute that is sometimes dyed red to disguise it as wild pacific salmon.
Fortunately, Atlantic salmon exists outside of fish-farms and is a perfectly good fish in its own right.

The mechanisms are the most important thing. What is fMRI actually measuring? It doesn't measure activity directly, since it's not built into the brain. Ergo, it measures activity indirectly by measuring something else entirely. But anything which also generates that something else will also be detected.

This is less a false positive than it is a complete confusion between direct and indirect observations. The falseness is not in the measurement but in the observer.

Sir Arthur Conan Doyle would have loved this finding; he often had his most famous creation, Sherlock Holmes, make snide remarks about the folly of poor observation and the absurdities that follow.

It detects the oxygenation of blood. The mechanism behind this is the different magnetic moment of hemoglobin: oxygenated hemoglobin is diamagnetic, while deoxygenated hemoglobin is paramagnetic. This is called the BOLD (Blood Oxygen Level Dependent) effect. The difference in magnetic properties between the two conditions affects the MRI signal lifetime in the near vicinity. This results in contrast developing between tissue with oxygenated blood and tissue with deoxygenated blood. The idea behind fMRI is that when you use a certain part of the brain, it requires oxygenated blood, which will lead to contrast. Unfortunately, due to the low overall signal strength/contrast-to-noise ratio, the image must be signal averaged. Hence if you were tapping your finger to see which part of your brain "lights up", you would have to repeat the action, with your MRI scan synced to it, so that the same part of the brain is imaged over the same interval each time. It's tricky, but my understanding is that it's quite feasible. There are many other mechanisms for causing localized signal lifetime changes; without having RTFA, I can't be sure which they took under consideration.
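The signal-averaging point is easy to demonstrate; here is a toy numpy sketch (illustrative numbers, not real BOLD data) showing residual noise shrinking roughly as sqrt(N) when N stimulus-synced trials are averaged:

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_samples = 64, 100

# Weak "BOLD-like" response buried in unit-variance noise on every trial.
signal = 0.5 * np.sin(np.linspace(0.0, np.pi, n_samples))
trials = signal + rng.standard_normal((n_trials, n_samples))

# Residual noise on one trial vs. on the average of all synced trials.
single_trial_noise = (trials[0] - signal).std()
averaged_noise = (trials.mean(axis=0) - signal).std()

gain = single_trial_noise / averaged_noise
print(gain)  # close to sqrt(64) = 8
```

This is why the finger-tapping has to be repeated and synced to the scan: averaging only helps if the response lands in the same place in each trial's time window.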

fMRI errors -- it is even stated in the document. This extreme case was selected to highlight the whole problem with fMRI statistics, not to show post-mortem activity, as some are trying to spin it.

* Dave Lister: Sometimes I think it's cruel giving machines a personality. My mate Petersen once bought a pair of shoes with artificial intelligence. Smart Shoes, they were called. It was a neat idea. No matter how blind drunk you were, they would always get you home. Then he got ratted one night in Oslo, and woke up the next morning in Burma. See, the shoes got bored just going from his local to the flat. They wanted to see the world, man, y'know? He had a helluva job getting rid of them. No matter who he sold them to, they'd show up again the next day! He tried to shut them out, but they just kicked the door down, y'know?
* Arnold Rimmer: Is this true?
* Dave Lister: Yeah! Last thing he heard, they'd sort of, erm, robbed a car and drove it into a canal. They couldn't steer, y'see.
* Arnold Rimmer: Really?!
* Dave Lister: Yeah. Petersen was really, really blown away by it. He went to see a priest. The priest told him, he said, it was alright, and all that, and the shoes were happy, and they'd gone to heaven. Y'see, it turns out shoes have soles.

AC - The paper has been rejected once so far. I won't mention the journal, but it was rejected on an editorial basis before it reached the peer review stage. I can only conjecture regarding why the editor decided to pass on the paper, but it was not (to my knowledge) rejected for any methodological deficiencies. We are currently in the review stage at a second journal and the reviewers had no trouble with our methods, only how we argue for multiple comparisons correction without stepping on too many toes.

As an interesting aside, the poster was also rejected at first. All the peer reviewers thought it was a joke and voted to exclude it from the conference. Once it went before the program committee they realized that, even though we had an odd approach, the conclusions of our data were sound and that we had a very good point to make.

Yep. What little I remember of stats is that it is an extraordinarily delicate tool; every little theorem is couched in uber-cautious qualifications. All the more reason to be cautious of stat-based findings from math-impaired social scientists, and medicine isn't all that far ahead in terms of math literacy.

venicebeach - Again, good points. The trouble is that multiple comparisons correction is not the de facto standard in any neuroimaging journal. Some journals, like NeuroImage and HBM, have become quite good about requiring correction in the results. Still, even they are not at 100%. Other journals with a lower impact factor are quite a bit worse, with uncorrected statistics used in almost 50% of the studies. So, either people know about the problem and are willingly choosing to ignore it when they publish...