It’s true that funnel plots only go so far, but they’re fairly unproblematic for the most part in practice, especially since a plot that ‘fits’ can generally be taken as evidence against bias. Since heterogeneity in study populations and study designs are the primary competing explanations for an off-kilter plot, it would also make sense in this case that those could be ruled out, since the studies were carried out by the same investigator.

It’s hard to blame design heterogeneity and population heterogeneity if the same person at the same institution is taking subjects from the same source population.

david: I suspect that was Dr. N’s intent. I don’t see how you would get a numeric value for “study quality” anyway.

“You can make a graph of all published studies with the quality of the study on the vertical axis and the result (positive or negative) along the horizontal axis.”

Why would this form a funnel shape? I can’t see any reason why there should be a relationship between variance in effect size and study quality. Don’t you mean ‘study size’ on the vertical axis?
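To see why the funnel shape follows from study size rather than study quality, here is a minimal simulation sketch (all numbers hypothetical: an assumed true effect of 0.5 and a per-study standard error of 1/√n). Smaller studies scatter more widely around the true effect, which is what produces the funnel when estimates are plotted against size or precision:

```python
import numpy as np

rng = np.random.default_rng(42)
true_effect = 0.5  # assumed true effect size (hypothetical)

# Simulate 200 studies of varying size; each study's estimate
# scatters around the true effect with standard error 1/sqrt(n).
n = rng.integers(10, 1000, size=200)
se = 1 / np.sqrt(n)
estimate = rng.normal(true_effect, se)

# Larger studies scatter less: this narrowing spread is the
# "funnel" seen when estimates are plotted against study size.
small = estimate[n < 100]
large = estimate[n >= 500]
print(f"spread (small studies): {small.std():.3f}")
print(f"spread (large studies): {large.std():.3f}")
```

Nothing in the simulation refers to quality; the funnel arises purely from sampling error shrinking with n, which supports the point that “study size” belongs on the vertical axis.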

By: etatro
http://theness.com/neurologicablog/index.php/perception-and-publication-bias/#comment-41358
Mon, 02 Apr 2012 21:27:20 +0000

As Steve is undoubtedly aware, publication bias (toward publishing positive results) comes from journal policies coupled with the funding and career-advancement priorities of funding agencies (e.g. the NIH) and universities. To get a grant, a researcher has to publish. To get a position or a promotion, or to keep a job, a researcher has to publish AND have grants. This metric is referred to as “productivity.” Journals tend to publish only positive results. The main case in which they will publish a negative result is when the study refutes a previously established positive claim. There, the bar is typically set higher for validity, significance, and methodological soundness, because the results go against a previously established set of facts, and the study also needs to provide some reason the earlier study got positive results. You’ll find the perfect example in Steve’s posts on the XMRV virus and Chronic Fatigue Syndrome.

I’m not sure how to address this problem. We do need some metric of productivity for researchers, and publishing results is one of them. But if a researcher spends 100K and 12 months of work testing a hypothesis that turns out to be wrong, should he or she not get future funding or a promotion? The temptation is to keep beating the data to a pulp until some publication can be eked out of questionable statistical wrangling, while playing down the 12 months of what would be perceived as failure.

I have actually witnessed older, established scientists (with boatloads of funding) beating the dead horse of disproven hypotheses, flawed animal models, and ambiguous or negative results, publishing questionable data because their reputations depend on being right all the time and on being “productive.” Younger scientists need to attach their names to the older ones in order to seem part of a “productive” team, to promote their careers, and to secure funding.

In my opinion, the problem of publication bias has three parts: 1. the “publish or perish” culture in academic research, 2. the capitalistic means of funding science in the US, and 3. the hierarchical culture of academia (new investigators and new ideas don’t get fair play). These problems are interconnected, and each affects the others.

By: cwfong
http://theness.com/neurologicablog/index.php/perception-and-publication-bias/#comment-41314
Sun, 01 Apr 2012 00:32:44 +0000

Here are some people putting the objections I felt much better than I was able to:

http://www.cochrane-net.org/openlearning/html/mod15-3.htm
Publication Bias: Interpreting funnel plots
*From these examples, we can see that a funnel plot is not a very reliable method of investigating publication bias, although it does give us some idea of whether our study results are scattered symmetrically around a central, more precise effect. Funnel plot asymmetry may be due to publication bias, but it may also result from clinical heterogeneity between studies (for example different control event rates) or methodological heterogeneity between studies (for example failure to conceal allocation).*

http://en.wikipedia.org/wiki/Likelihood_function
*Attempting to interpret the likelihood of a hypothesis given observed evidence as the probability of the hypothesis is a common error, with potentially disastrous real-world consequences in medicine, engineering or jurisprudence. See prosecutor’s fallacy for an example of this.*
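The quoted distinction between likelihood and probability can be made concrete with a toy prosecutor’s-fallacy calculation (all numbers hypothetical): even if a forensic match is very unlikely for an innocent person, P(guilty | match) can still be small when the pool of candidates is large.

```python
# Prosecutor's fallacy with hypothetical numbers:
# a forensic match occurs by chance for 1 in 10,000 innocent
# people, and the candidate pool holds 1,000,000 people.
p_match_given_innocent = 1 / 10_000
pool = 1_000_000
guilty = 1  # assume exactly one guilty person, who always matches

# Expected number of innocent people who match by chance:
innocent_matches = (pool - guilty) * p_match_given_innocent

# P(guilty | match) is nowhere near 1 - 1/10,000:
p_guilty_given_match = guilty / (guilty + innocent_matches)
print(f"expected innocent matches: {innocent_matches:.1f}")
print(f"P(guilty | match) = {p_guilty_given_match:.4f}")
```

The likelihood of the evidence given innocence (1/10,000) is tiny, yet the posterior probability of guilt given the evidence is only about 1%, because roughly a hundred innocent people in the pool are expected to match by chance.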

http://lesswrong.com/lw/1ib/parapsychology_the_control_group_for_science/
*Parapsychologists are constantly protesting that they are playing by all the standard scientific rules, and yet their results are being ignored – that they are unfairly being held to higher standards than everyone else. I’m willing to believe that. It just means that the standard statistical methods of science are so weak and flawed as to permit a field of study to sustain itself in the complete absence of any subject matter.
— Eliezer Yudkowsky, Frequentist Statistics are Frequently Subjective*

Further, from my dictionary: “Bias is a predisposition either for or against something; one can have a bias against police officers or a bias for French food and wines”
So in research and publication of results, bias should be determined relative to the researcher’s purposes. If you have a new proposition in your field and want to demonstrate that your ideas have value, are you biased? And if so, is such bias necessarily inappropriate? Hardly.

By: cwfong
http://theness.com/neurologicablog/index.php/perception-and-publication-bias/#comment-41309
Sat, 31 Mar 2012 17:34:45 +0000

Hey nybgrus, discuss this with your friend: http://www.apperceptual.com/baldwin-editorial.html
By: cwfong
http://theness.com/neurologicablog/index.php/perception-and-publication-bias/#comment-41308
Sat, 31 Mar 2012 17:26:55 +0000

Even BillyJoe7 seems to be more positive here than nybgrus. You know, the one who always discusses these things with an additionally anonymous “friend.” The friend always loses the argument, of course.
Something then has come from nothing, which even BillyJoe7 no longer believes.
Yet when asked to make a substantive response to the same question publicly, nybgrus can only state that he’s already done so with a friend, that his authority to declare his adversaries wrong is thereby established, and that he doesn’t have to show no stinking badge.
Also, wasn’t he the expert on biological evolution who believed that professionals such as Margulis and Shapiro didn’t know nearly as much about those stinking bacteria as he did? A priori, the data was on his side. He proved that when he argued with a friend. A posteriori probability be damned.
By: daedalus2u
http://theness.com/neurologicablog/index.php/perception-and-publication-bias/#comment-41307
Sat, 31 Mar 2012 14:49:31 +0000

Billy-Joe, if the data is correct, that is pretty much all that can be expected. What we need to ensure is that enough data is reported to show how good the data is, whether the conclusions follow from it, and whether the conclusions are generalizable.

For example, in the thread on ECT, I just realized that the authors’ entire conclusion is wrong; but because they published their data, it can be seen that it is wrong.

We spend a lot of time finding errors in research, and the same sorts of errors come up time and again. This effectively makes the research useless or, at least, less than useful. Shouldn’t we be coming up with ways to prevent this tragic waste of time and effort?

This could be done by educating researchers and by limiting research funds to those who demonstrate that they know how to conduct a methodologically sound and unbiased clinical trial. The file-drawer effect can be prevented by registering trials and disallowing publication of unregistered trials.

Now that the factors behind faulty trial design are well known, isn’t it time to take preventative action?

By: nybgrus
http://theness.com/neurologicablog/index.php/perception-and-publication-bias/#comment-41278
Fri, 30 Mar 2012 09:48:23 +0000

@cwfong: Nope. I am giving you exactly as substantive a response as you deserve based on your comments.

Educate yourself first, at least in the basic premise of intelligent discourse, and then we can continue.

Though your use of the term “evolutionist” suggests that will almost certainly never happen.