Using evidence syntheses to analyse and improve the quality of research

Yesterday, I gave a presentation at the What Works Global Summit (WWGS), in a session titled ‘Improving study design’. The WWGS is the ‘Mecca’ of systematic reviewers: by some counts, about 900 of them attended the event (though that figure includes many ‘consumers’ of systematic reviews, such as policy-makers and service providers). Naturally, I had to be there. The take-home message of my presentation was this: although systematic reviews are primarily used to answer substantive questions about the mean effect of medical treatments or social interventions, they can also be used to understand and improve methodological decisions such as sampling, allocation to conditions, outcome measurement and reporting.

The systematic study of research decisions – known as ‘meta-research’ or ‘research on research’ – is a relatively new discipline, but it has already made very significant contributions:

It has demonstrated the highly inconsistent quality of research (John Ioannidis);

It has identified a number of factors increasing the risk of substandard research (e.g. financial conflicts of interest);

It has triggered a discussion on how to improve the credibility of science (e.g. through reporting guidelines and financial disclosures).

I also advocated for the publication of all data and intermediate files used in systematic reviews. Interestingly, very few systematic reviews observe the principles of ‘open science’. Publishing all files would have many advantages: (i) it would make meta-research easier (and cheaper!); (ii) it would help systematic reviewers make the most of their data and perhaps earn extra citations; and, last but not least, (iii) it would make systematic reviews more credible. According to John Ioannidis, “We have an epidemic of deeply flawed meta-analyses”. Requiring the publication of all files and data would probably create a strong incentive for systematic reviewers to be more… systematic.