Comments on The Neurocritic: Voodoo Correlations in Social Neuroscience

----
Anonymous | March 15, 2016, 5:06 PM

Just curious, did any of the accused authors redo their analysis?

----
The Neurocritic | March 25, 2010, 1:06 PM

Matt - Thanks for providing the link. I posted the videos here:

<a href="http://neurocritic.blogspot.com/2010/03/voodoo-and-type-ii-debate-between-piotr.html">Voodoo and Type II: Debate between Piotr Winkielman and Matt Lieberman</a>

----
Matthew Lieberman | March 24, 2010, 1:24 PM

For anyone interested, there was a public debate on Voodoo Correlations last fall at the Society of Experimental Social Psychologists between Piotr Winkielman (one of the authors on the Voodoo paper) and myself (Matt Lieberman). The debate has been posted online.
http://www.scn.ucla.edu/Voodoo&TypeII.html

----
The Neurocritic | April 8, 2009, 11:50 PM

Anonymous of March 31, 2009 11:50 AM,

You might want to read Vul et al.'s rebuttal (<a href="http://www.psychologicalscience.org/journals/pps/4_3_inpress/Vul_reply_final.pdf">PDF</a>) to Lieberman et al. (<a href="http://www.psychologicalscience.org/journals/pps/4_3_inpress/Lieberman_final.pdf">PDF</a>) and Nichols et al. (<a href="http://www.psychologicalscience.org/journals/pps/4_3_inpress/Nichols_final.pdf">PDF</a>). While you're at it, you can take a gander at the other articles in <i><a href="http://www.psychologicalscience.org/journals/pps/4_3.cfm">Perspectives on Psychological Science</a></i>, Vol. 4, No. 3 (May 2009), including those by statisticians who support the arguments of Vul et al.

----
Anonymous | March 31, 2009, 11:50 AM

The issue of positively biased statistics after voxel selection is quite valid, but also quite well known among neuroimagers. Ed Vul performed a haphazard "meta-analysis" with cherry-picked data, and falsely accused a number of authors of scientific malfeasance. There have been three significant rebuttals: Jabbi, Lieberman, and Nichols.
Vul has replied (rather weakly) to the Jabbi rebuttal, but has so far ignored the others, which contain a much stronger and better-supported condemnation of his paper.

It is indeed disappointing that so many have jumped on the Ed Vul bandwagon without first acquainting themselves with the standards in the field, or with the work reported in the "accused" papers.

----
Anonymous | March 20, 2009, 10:50 AM

UCLA's Dr. Matthew Lieberman and Dr. Naomi Eisenberger's work must be thoroughly investigated! They are guilty of performing unethical and harmful social cognitive research experiments on unconsenting human subjects!

In order for Dr. Matthew Lieberman and Dr. Naomi Eisenberger to get the results for their social cognitive research experiments involving long-term emotional and physical pain, abandonment from loved ones, and being socially rejected, they teamed up with a small group of individuals that devised a horrible plan to ruin and destroy innocent, unconsenting, and unknowledgeable patients' social environments with systematic harassment!

The systematic harassment was designed to deliberately stir up enough problems within a person's social life that they would soon find themselves suffering from the very same social cognitive psychology problems that these two social psychology doctors just happened to be investigating!

Can anyone tell me who should be contacted, so that a deeper investigation into their unethical research methods can begin? I would sure appreciate it! And thank you so much
For writing this paper!

----
Anonymous | March 8, 2009, 5:31 PM

I'll let interested readers read the array of counterpoints to Vul for themselves (I'd first recommend those of Matt Lieberman et al.; see above posts). Vul's article has been thoroughly debunked.

I've been working in fMRI labs for about 3.5 years, and I can say with confidence that any imager with a modest knowledge of standard analysis techniques was able to recognize the faulty reasoning in Vul's criticism. (The extent to which his meta-analysis was sloppily executed did, however, take me quite by surprise when I read Lieberman's rebuttal.)

What I find most disappointing is the vigor with which non-imagers (and the media at large) blindly accepted Vul's criticism. It is ironic that this group of unquestioning individuals was in fact victim to the exact sort of naive acceptance of published claims that Vul attempted to attack.

----
Anonymous | February 12, 2009, 1:44 PM

The Vul paper made an important point about inflated p values that can emerge in neuroimaging research.
Unfortunately, the tone of the paper, the method by which they misled researchers into participating, the extent to which they misrepresent findings (i.e., they cherry-picked within the papers they presented), and the attack on a narrow, emerging field when the problem they describe is in no way unique to that field, amount to shoddy scholarship and will undermine an otherwise important point.

----
The Neurocritic | February 3, 2009, 5:29 PM

To the Anonymous of February 03, 2009 12:28 PM,

Nancy Kanwisher is not an author on the Voodoo paper, so your personal attack on her is irrelevant here.

The rebuttals to Vul et al. have emphasized that the latter's analytic objections are by no means unique to social neuroscience, so you are not alone in your defensiveness. Vul et al. acknowledged the generality of their critique, albeit not in a prominent way. Furthermore, note that two of the authors are social psychologists, one of whom has published both fMRI and EMG papers on the topic.

----
Anonymous | February 3, 2009, 12:28 PM

It is interesting to me that Vul et al. imply incompetence and/or fraudulence in fMRI data analyses when their own review of the literature was so clearly biased - leaving out most of the findings that DON'T support their conclusions.

Kanwisher (Vul's MIT advisor) is well known as a media-attention-seeking person who appears more interested in making a newsworthy splash than in checking facts. The procedures they are critiquing are almost never used by researchers, and certainly not by most of the researchers they cited (selectively).
Incompetence and fraudulence run rampant, apparently.

Vul et al. also attack a single area of research (i.e., social neuroscience), when in fact all of neuroscience is susceptible to their criticism... if their criticism were accurate. It suggests some a priori reason to dislike and attack the field, not the method.

I believe that this is an unfortunate attempt to gain notoriety by appealing to those who already hold negative attitudes toward social neuroscience, or neuroscience more broadly, or (sadly) science in general.

I implore posters agreeing with Vul et al. to ask themselves if they truly know enough to be agreeing. Have you performed extensive fMRI analyses? Have you reviewed the studies Vul et al. cited? Please do not dismiss some research as "unscientific" while simultaneously failing to engage in critical thinking yourselves.

Cheers!

----
The Neurocritic | January 28, 2009, 1:35 AM

Anonymous LiebermanBerkmanWager - The pointer to your rebuttal is much appreciated. I did link to it in a <a href="http://neurocritic.blogspot.com/2009/01/voodoo-gurus.html">new post</a>.

Tom - You made a number of important points, thanks for commenting.

----
Tom | January 27, 2009, 1:44 PM

I have been following the debate with interest. I just read the reply by Lieberman and colleagues at http://www.scn.ucla.edu/pdf/LiebermanBerkmanWager(invitedreply).pdf. They point out that NONE of the "red list" authors they were able to contact (which appeared to be most of them) said they conducted their analysis in the way it was portrayed by the Vul paper.
This raises the concern that Vul and colleagues may have inadvertently misrepresented the full picture and created a "straw man" to attack. Since much of the Vul argument is based on self-reported responses to a handful of survey items, it might be premature to "skewer" the entire field of social neuroscience. Has anyone considered the possibility that the wording or presentation of the Vul survey may itself have introduced some bias or error into their data? The survey items about analysis strategy do seem at least a little cryptic and open to interpretation. It would be a shame to prematurely tarnish years of work, professional reputations, and countless millions of dollars of public research funding based on a couple of answers to a few potentially ambiguous and unvalidated survey items. Just a thought.

----
Anonymous | January 27, 2009, 11:48 AM

http://www.scn.ucla.edu/pdf/LiebermanBerkmanWager(invitedreply).pdf

----
The Neurocritic | January 13, 2009, 10:31 AM

Interested parties can read <a href="http://neurocritic.blogspot.com/2009/01/voodoo-counterpoint.html">Voodoo Counterpoint</a> for an excerpt of the rebuttal by Jabbi, Keysers, Singer, and Stephan (<a href="http://www.bcn-nic.nl/replyVul.pdf">entire PDF</a>).

----
Anonymous | January 12, 2009, 7:03 AM

For a reply by some of the criticized authors, read
http://www.bcn-nic.nl/replyVul.pdf

----
vtw | January 11, 2009, 9:08 AM

Hey Neurocritic.

Yes, I read the paper closely. It's quite good and timely; there are only two issues I disagree with. First, many of the simulations that the authors use to build their argument do not directly compare to the way imagers actually run the analysis (Appendix 2 is about signal inflation, not the possibility of false positives). More on this in a minute. Second, the authors make the very strong claim that the results of many of the studies might be false. Once again, the underlying regression analysis is no less sound than doing whole-brain GLMs for contrasting conditions. The same possibility of finding spurious results by chance still applies, which is true of statistics in general, not just neuroimaging. Hence the convention of p<0.05.

Back to the simulations. These are very illuminating, but to my mind the simulation that truly matches the way imagers do neuroimaging is not in the paper. As I'm certain you and many readers here know, imagers use a cluster-extent threshold to ensure that the analysis reveals clusters rather than single voxels. So, to put the simulations back in terms of, for instance, the authors' stock exchange example: the problem is not that the weather station readings will correlate with some measure of the stock exchange by pure chance. Rather, they should pose it as: what are the odds that the weather station readings correlate with 10 stocks which are ordered consecutively on some list? The authors do address this in point F (page 18) using AlphaSim (which I also use). And they are correct: most papers have a tendency to choose a threshold that is more liberal than FWE p<0.05. One convention in many studies is p<0.001 and k=10.
This isn't true of all, but it's true of many and is generally accepted. It's one way of dealing with the very real fact that in imaging, doing true FWE correction leads to far too conservative a threshold. The alternative is to bump up the cluster size. But bump it up too high and you lose the possibility of detecting many smaller structures (the amygdala comes to mind).

As for 38% of the papers using the peak voxel: it doesn't really matter what they use, peak or ROI. It is a cartoon. And in this I agree with the paper. The r-values are inflated. The paper makes this point excellently.

The problem is that readers in the blogosphere are taking this to mean that a large portion of the papers using correlations in imaging are reporting nonsense. Which simply isn't the case.

That many of these papers got by based on large r-values, I have no doubt. And so yes, this paper is important for demonstrating the problem. But once again, the underlying whole-brain regressions are sound. You have people on some blogs (I followed the links from the authors' website) saying things like "all fMRI is bogus." And for the next while there's going to be a stigma attached to doing whole-brain regressions, with every reviewer and their mother asking about non-independence, whether or not it applies. I suppose this can be construed as a good thing. But we all know reviewing is a bit of a voodoo science itself. All it takes is one reviewer who doesn't understand the finer points of this argument to kill a paper.

Anyhow, in the end I agree with the authors: some form of split-half or leave-one-out cross-validation is probably the best way to go if you want to report r-values. But I strongly disagree that there is something intrinsically wrong with running a whole-brain regression. It's only when you try to express this as an r-value that non-independence creeps in.

Whew, all that being said.
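The distinction vtw draws here — a sound whole-brain search versus an inflated reported r-value — is easy to demonstrate numerically. The following is an editorial sketch, not code from any of the papers under discussion; the subject and voxel counts are arbitrary. Selecting the peak voxel and reporting its correlation produces a large r even on pure noise, while the split-half procedure vtw endorses does not:

```python
import numpy as np

rng = np.random.default_rng(0)
n_subj, n_vox = 16, 10_000

# Pure noise: per-subject "activation" at each voxel, plus a behavioral score.
# By construction there is no true brain-behavior correlation anywhere.
brain = rng.standard_normal((n_subj, n_vox))
behavior = rng.standard_normal(n_subj)

def corr_with_behavior(data, scores):
    """Pearson r between each voxel column and the behavioral scores."""
    zd = (data - data.mean(0)) / data.std(0)
    zs = (scores - scores.mean()) / scores.std()
    return zd.T @ zs / len(scores)

# Non-independent analysis: search all voxels, then report r at the peak.
r_all = corr_with_behavior(brain, behavior)
peak = np.argmax(np.abs(r_all))
print(f"peak |r| on pure noise: {abs(r_all[peak]):.2f}")  # often ~0.8 despite zero true effect

# Independent (split-half) analysis: pick the peak using half the subjects,
# then measure r at that voxel in the held-out half.
half = n_subj // 2
r_train = corr_with_behavior(brain[:half], behavior[:half])
peak_train = np.argmax(np.abs(r_train))
r_test = corr_with_behavior(brain[half:], behavior[half:])[peak_train]
print(f"held-out r at that voxel: {r_test:.2f}")  # unbiased; near zero on average
```

The inflation comes entirely from searching and reporting on the same data; the held-out estimate is unbiased, which is the sense in which the underlying regression can be sound while the plotted r-value is not.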
Great blog post as usual!

----
The Neurocritic | January 11, 2009, 12:36 AM

To the Anonymous of January 10, 2009 11:52 AM,

I'm sure Vul et al. very much appreciate your interest in their paper. And at least you admit you were being pedantic... ;-)

----
The Neurocritic | January 11, 2009, 12:26 AM

vtw - thanks for your comments, but did you actually read the paper? Did you look at <b><i>Figure 4: A simulation of a non-independent analysis on pure noise</i></b>?

In addition, Vul et al. noted that:
<b><i>38% of our respondents who reported the correlation of the peak voxel (the voxel with the highest observed correlation) rather than the average of all voxels in a cluster passing some threshold.</i></b>

And why do you say that bloggers are conducting a witch hunt? The manuscript has been accepted for publication in <i><a href="http://www3.interscience.wiley.com/journal/118509128/home">Perspectives on Psychological Science</a></i>, so of course reviewers will see it.

----
Anonymous | January 10, 2009, 11:52 AM

Thanks for your reply to the concerns I raised. I love the paper, but the psychometrician in me remains (perhaps pedantically) hung up on your statement in your paper (p.
2-3) that "Thus, the reliabilities of two measures provide an upper bound on the possible correlation that can be observed between the two measures (Nunnally, 1970)."

As I noted earlier: reliabilities refer to scores (not measures), correlations are observed between scores on measures (not between measures), and score reliability does not provide an upper bound on possible correlations. Perhaps pedantic, but I could not help hearing the disapproving voice of my old psychometrics professor as I read that sentence. Congratulations on a wonderful paper nonetheless!

----
vtw | January 10, 2009, 11:44 AM

It's unfortunate that the blog world is getting all excited by this. This is hardly a new issue, nor one endemic to "social neuroscience." Non-independence is certainly an issue if you take the correlation results as being more than they are (and I'm sure more than a few authors are happy to have you think that).

However, the underlying analysis is sound. In a nutshell, what most of the studies criticized in this paper have done is this: they've run a whole-brain regression of brain activity against a psychological variable. They then take the resulting map and apply a statistical correction for multiple comparisons (using whatever method they prefer: FDR, RFT, or a value based on Monte Carlo simulations, etc.). They usually also apply a cluster-extent threshold, such that clusters must be composed of, say, 10 contiguous voxels to be considered significant (this helps get rid of spurious findings from single voxels, which have a stronger likelihood of being due to chance). <i>So far this is identical to the contrasting of conditions that is the standard analysis in nearly all published fMRI studies.</i>

What is making everyone flustered is that the researchers then use regions of interest based on this whole-brain regression map to create scatterplots of signal change vs. a psychological variable. Yes, these scatterplots will necessarily be significant, since the previous analysis said they were. But if you wanted to be convinced of a correlation, what would you rather see? A cluster and a t-value, or a scatterplot, so that you can see for yourself whether the correlation is sound or being driven by outliers?

Perhaps if authors wrote "for illustrative purposes we also show a scatterplot of the observed correlation," people would relax a little. But is that really necessary? Most imagers understand that this is the value of the scatterplots.

This issue also comes up with traditional contrast methods. For example, conditions A and B are contrasted. Regions show up as more active in A>B. Then signal change is extracted from ROIs based on the peak voxels showing the maximum difference between A and B, and those signal-change values are plotted. The resulting bar graphs also suffer from non-independence; however, they are informative, as they tell you the direction of the effect relative to baseline. Now, not everyone buys the idea of a resting baseline consisting of a fixation cross, but that's a separate argument.

Finally, let's not get too carried away with functional localizers. They're only as good as the localizer you use. And they can lead one to ignore potentially interesting effects that occurred outside of what the localizers activated.

At the moment this paper has generated a witch hunt among bloggers. If this reaches reviewers, it's going to be a pain for years to come. Perhaps conventions need to be changed such that people are more explicit in noting that the scatterplots are non-independent.
But non-independence doesn't invalidate the original regression analysis.

----
The Neurocritic | January 9, 2009, 10:47 PM

Thanks to the latest Anonymous Commenter for pointing out Ed Vul's more proficient answers to the questions from the January 08, 2009 10:28 AM <a href="http://neurocritic.blogspot.com/2009/01/voodoo-correlations-in-social.html#c4244579253538004996">Anonymous</a>.

<a href="http://www.edvul.com/voodoocorr.php"><b>Supplementary Q and A</b></a>
Since some interesting questions have been raised about our paper (in these blogs and otherwise), we'll do our best to address them here.

<b>Q: Since reliabilities apply to scores on measures rather than the measures themselves, how can you use reliabilities from other samples to make inferences about the scores used in the particular studies you describe?</b>
A: Like nearly all social scientists, we assume that the reliability of a measure estimated from scores obtained on one sample will generalize to other samples. It is true that these measures of reliability will vary from sample to sample, but this is true of any measure ever obtained from a sample population. We hope (and we assume the authors of the articles do too) that the participants sampled into the reported studies were representative (with respect to the inferences in question). If they are, then we have no reason to suspect that the reliability of scores on any of the measures in these samples will differ substantially from those of other samples that have previously been used to evaluate these measures.

<b>Q: Does reliability put an absolute constraint on the correlation that may be obtained?</b>
A: No, reliability puts a constraint on the expected value of the correlation.
Noise may make the correlation higher sometimes, and lower at other times. We argue that these articles have selected favorable noise that increases the apparent correlation, thus causing these estimates to systematically exceed the maximum possible expected value of a correlation between these measures.
We should reiterate that the theoretical upper bound on the expected correlation is itself much higher than what should reasonably be expected, since it assumes a perfect underlying correlation.

----
Anonymous | January 9, 2009, 9:32 PM

Anonymous who raises concerns about the definition of reliability, etc.: it seems that Edward Vul has actually posted responses to your queries here:

http://www.edvul.com/voodoocorr.php

----
The Neurocritic | January 8, 2009, 11:28 AM

Thanks for commenting.

First, I'm aware that the scores or hemodynamic measures in question are taken on the same day, and that test-retest reliability applies to scores or measures taken at different times. I'm sure Vul et al. know this as well. However, the personality/emotionality scores and the hemodynamic measures are typically not obtained at the same time.
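The distinction drawn in the Q and A above — reliability caps the expected correlation, not any single observed one — can be checked with a short simulation. This is an editorial sketch, not the authors' code; the reliabilities of .8 and .7 come from the example debated in this thread, and the sample size is arbitrary. With a perfect underlying correlation, the average observed r settles at sqrt(.8 × .7) ≈ .748, yet a sizable share of individual samples land above that bound:

```python
import numpy as np

rng = np.random.default_rng(1)
rel_x, rel_y = 0.8, 0.7          # hypothetical reliabilities from the thread's example
bound = np.sqrt(rel_x * rel_y)   # attenuation ceiling, ~0.748

n, reps = 20, 5000
rs = []
for _ in range(reps):
    t = rng.standard_normal(n)   # shared true score: underlying correlation is perfect
    # Add measurement noise scaled so Var(true)/Var(observed) equals each reliability.
    x = t + rng.standard_normal(n) * np.sqrt(1 / rel_x - 1)
    y = t + rng.standard_normal(n) * np.sqrt(1 / rel_y - 1)
    rs.append(np.corrcoef(x, y)[0, 1])
rs = np.array(rs)

print(f"bound          : {bound:.3f}")
print(f"mean observed r: {rs.mean():.3f}")          # stays at or below the bound
print(f"share above it : {(rs > bound).mean():.0%}")  # single samples do exceed it
```

So "favorable noise" can push any one sample's correlation past the ceiling; it is the expected value that the reliabilities constrain.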
Furthermore, even if one accepts that the observed correlation can be greater than .74, that doesn't invalidate the fact that 54% of the surveyed papers used non-independent analysis methods, which inflated the correlation values.

Although your comment will have a reasonably sized audience here, you might want to address the authors directly in a letter to the editor of <i>Perspectives on Psychological Science</i>...

----
Anonymous | January 8, 2009, 10:28 AM

There are two major problems with your (and Vul et al.'s) discussion of reliability and its impact on correlations.

First, reliability does not refer to measures but to scores on measures. Thus, you cannot say that a measure has a reliability of X. Rather, scores on the measure (i.e., in a particular setting) have a reliability of X. It is thus not very meaningful to apply reliabilities observed in other contexts to a new situation, since the reliability of scores on the measure could be significantly higher in a new situation.

Second (and more importantly), observed unreliability does not place an absolute constraint on observed correlations - which is why correlations corrected for unreliability are occasionally greater than 1. This (as has been pointed out by the likes of Murphy, 2003) is typically due to underestimates of reliability. Cronbach's alpha is, for example, typically a lower bound on the reliability of a set of items. It is thus quite possible to observe a correlation that is larger than the square root of the product of two reliability estimates.
This is contrary to your statement that the correlation between (scores on) two measures with reliabilities of .8 and .7 cannot be greater than .74. I am surprised that a journal as good as Perspectives has allowed these rather elementary errors to slip through.

----
Anonymous ("Sonic") | January 8, 2009, 2:19 AM

How often do the statistics actually back the conclusions in the research I read? Very seldom. Statistical analysis is not being taught well, I would guess.

I agree with Aldebrn here - this is one of many to come.

(I see a problem in science in general: the idea is to make a hypothesis and then find evidence to support it. I see this in biology, in astrophysics, and in neuroscience especially. Good science is an attempt to disprove the current theory. My two bits...)