Meta-meta-analysis concludes that meta-analyses can be iffy

Many meta-analyses suffer from inaccurate conclusions based on asymmetry tests …

Most of the research that we report here at NI takes the form of peer-reviewed original research. Sometimes, though, the studies reported on are what are known as meta-analyses. A meta-analysis is a paper that searches the existing literature, usually clinical trials, and pools together a number of different studies on the same topic in an effort to obtain a much larger sample size, and therefore a clearer picture.

As anyone who has spent any time at all in science will tell you, meta-analyses can be very problematic. For one, the authors have to base their meta-analysis on studies over which they had no control, and which will have been conducted differently from one another. Secondly, it's not unheard of for authors to leave the occasional nugget of data out of their papers.

Now, a new paper that will be published tomorrow in the Canadian Medical Association Journal* has looked at a vast number of meta-analyses and examined whether or not the studies had used valid statistical tests to reach their conclusions. The data, 6873 different studies, were collected from the Cochrane Database of Systematic Reviews, a collection of evidence-based medical research maintained by a non-profit organization. At issue was the appropriateness of asymmetry testing, and so a meta-meta-analysis of sorts was carried out, and the results are not surprising. The paper examined funnel plots: graphs that plot each study's effect size against its precision, and looked for statistical asymmetry in these plots, which can signal publication bias. They found that many meta-analyses erroneously come to conclusions that their statistics ought not to support.
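To make the idea of asymmetry testing concrete, here is a minimal sketch of one widely used funnel-plot asymmetry test, Egger's regression, in Python. This is an illustration of the general technique, not necessarily the specific test the CMAJ paper evaluated: you regress each study's standardized effect (effect divided by its standard error) on its precision (one over its standard error), and an intercept far from zero suggests an asymmetric funnel.

```python
import numpy as np
from scipy import stats

def egger_test(effects, std_errors):
    """Egger's regression test for funnel-plot asymmetry.

    Regresses the standardized effect (effect/SE) on precision (1/SE).
    An intercept significantly different from zero suggests asymmetry,
    which is often read as a sign of publication bias.
    Returns (intercept, two-sided p-value for the intercept).
    """
    effects = np.asarray(effects, dtype=float)
    se = np.asarray(std_errors, dtype=float)
    z = effects / se          # standardized effects (y-axis)
    precision = 1.0 / se      # precision (x-axis)
    result = stats.linregress(precision, z)
    n = len(effects)
    t_stat = result.intercept / result.intercept_stderr
    p_value = 2 * stats.t.sf(abs(t_stat), df=n - 2)
    return result.intercept, p_value
```

With a roughly symmetric set of studies the intercept lands near zero; a skewed funnel (for example, small imprecise studies reporting only large effects) pushes it away from zero. The test is known to be underpowered when a meta-analysis pools only a handful of studies, which is part of why applying it indiscriminately leads to the kinds of erroneous conclusions the paper describes.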

I must admit, after reading the paper and writing this piece, I feel like I'm trapped in the medical research equivalent of a Möbius strip, but the take-home message is that a meta-analysis is never going to be as good as an actual clinical trial.