"...instead of [picking] through the research literature, consciously or unconsciously finding papers here and there that support [our] pre-existing beliefs, [we] take a scientific, systematic approach to the very process of looking for scientific evidence, ensuring [our] evidence is as complete and representative as possible of all the research that has ever been done..." — Goldacre, 2012

A systematic review is a scientific investigation in and of itself; it has pre-planned methods and an assemblage of original studies as its ‘subject’. Results of multiple primary papers are thus synthesized in the SR, and strategies are used to limit bias and random error. These strategies include a comprehensive search of all potentially relevant articles and the use of explicit, reproducible criteria in the selection of studies for review. Primary research designs and study characteristics are appraised, data are synthesized, and results are interpreted. According to the Dictionary of Epidemiology (Last, 4th edition, 2001), a systematic review is "...the application of strategies that limit bias in the assembly, critical appraisal, and synthesis of all relevant studies on a specific topic. Meta-analysis may be, but is not necessarily, used as part of this process..." More specifically, the Health Technology Assessment (HTA) editorial board says that "...a systematic review is a synthesis that collates all empirical evidence fitting pre-specified eligibility criteria in order to answer a specific research question..."

What is a systematic review?

A systematic review is an investigation of a clearly-formulated question that uses methodical and explicit steps to identify, select, and critically appraise relevant research, and to collect and analyze data from any appropriate studies that may be found. The process includes a series of searches for papers or studies in reputable biomedical databases (such as PubMed or Embase for medical questions); a comparison of important features of each study against a list of inclusion and/or exclusion criteria (features include study design, subject or medical condition attributes at baseline, details of the exposure or intervention, time factors, etc.); and a detailed critical appraisal of each study for risk of bias and potential confounding factors. Statistical methods (meta-analysis) may or may not be applied to analyze and summarize the results of these included studies. In published reports, systematic reviews explicitly describe the database(s) and key words that were used in the search; the last date databases were searched; the study inclusion/exclusion criteria; the process for determining inclusion/exclusion, risk of bias/confounding, and data extraction; a list of apparently-relevant studies that were excluded, and the reason for each exclusion; key details of each of the included studies, if any; and a summary of the findings.
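The screening step described above — comparing each candidate study against pre-specified inclusion/exclusion criteria and recording the reason for each exclusion — can be sketched in code. This is only an illustrative toy, not any particular review's protocol; the criteria, field names, and `screen` function here are hypothetical.

```python
# Hypothetical sketch: screening candidate studies against
# pre-specified inclusion criteria, recording the reason for each
# exclusion, as a published SR's methods section would report.

INCLUSION = {
    "design": {"randomized controlled trial"},      # assumed criterion
    "population": {"adults"},                       # assumed criterion
}

def screen(study):
    """Return (included, reason); the reason documents any exclusion."""
    for field, allowed in INCLUSION.items():
        if study.get(field) not in allowed:
            return False, f"excluded: {field} = {study.get(field)!r}"
    return True, "included"

candidates = [
    {"id": "A", "design": "randomized controlled trial", "population": "adults"},
    {"id": "B", "design": "case report", "population": "adults"},
]

for s in candidates:
    included, reason = screen(s)
    print(s["id"], reason)
```

The point of the sketch is transparency: because the criteria are explicit and the exclusion reasons are logged, another reviewer could rerun the same screening and reach the same list of included studies.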

Systematic reviews have been described as syntheses and "...papers that summarize other papers" and "overviews of primary studies that have used explicit and reproducible methods". In the systematic review, the systematic approach to reviewing the literature is more robust and powerful than in traditional literature reviews, which may be prone to biases of various kinds (e.g., language, publication, geographic). Further, a systematic review strives to exhaustively search all the relevant peer-reviewed literature as well as grey literature and unpublished research findings. The process by which studies are included or excluded is entirely transparent, and is therefore repeatable for future updating.

Cook et al describe SRs as "...scientific investigations in themselves, with pre-planned methods and an assembly of original studies as their “subjects.”" SRs synthesize findings from key, high-powered trials and reports of therapies and interventions using explicit inclusion and exclusion criteria, and may or may not include a meta-analysis. As an approach to gathering, analyzing and synthesizing a body of research, SRs are very popular. SRs include clearly-defined protocols and procedures that ensure accountability and transparency and are typically collaborative in nature. Research teams work in conjunction with a group of professionals, experts and practitioners in the field to ensure that all key resources are located and evaluated. SRs are comprehensive in the way they capture relevant literature, yet they are often very specific. A set of criteria, called inclusion/exclusion criteria, clearly defines which studies are to be included in or excluded from the review. In the final analysis and synthesis, SRs are evaluated based on methodological rigour, and a meta-analysis is conducted when possible.

Why do we do systematic reviews?

SRs function as the only explicit, transparent, systematic method to bring together a large body of evidence to address a focused question. The aim is to unify the evidence on a question, and to do so systematically and openly, so that what you did is reproducible and readers can judge whether your review is credible. Having collated the body of evidence, you can then apply the GRADE approach, a systematic and explicit method for making judgements about the quality or certainty of (or confidence in) the underlying estimates of effect. These judgements are based on an assessment of the risk of bias in the included studies (randomization, allocation concealment, blinding, selective reporting, severe baseline imbalance, early stopping for benefit, etc.), the precision of the estimates, the consistency of the estimates, directness (the applicability of the evidence to your patient or population), and publication bias.

To determine whether an intervention has any ‘worth’.

To facilitate a well informed decision.

To assess the quality and synthesize evidence.

To put together published and unpublished evidence.

To avoid harms (Cochrane's logo is a good example).

To promote beneficial changes in clinical practice (if the SR provides strong, high-quality evidence).

To conduct a meta-analysis (e.g., when there are several small trials whose results can be combined).

To collect the adverse effects of an intervention.

To compare the effect of several interventions (a broad systematic review).

To develop clinical practice guidelines and clinical pathways.

To avoid unwarranted heterogeneity in clinical practice.

To quantify, quite tightly, how effective the intervention is.

To understand the adverse events associated with the intervention.

To see what has been done before, to see if new research is needed.

To determine the risk of bias / threat to effect size estimation by examining the challenging issue of unpublished data (see grey literature).

To help the pharmaceutical industry demonstrate the value of new products; to judge the true ‘worth’ of new drugs and devices compared to what is currently available.
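One purpose listed above — combining the results of several small trials in a meta-analysis — can be illustrated with a minimal sketch. This shows fixed-effect inverse-variance pooling, one common approach; the effect estimates and standard errors below are hypothetical, and a real meta-analysis would also assess heterogeneity and consider a random-effects model.

```python
# Minimal sketch of fixed-effect inverse-variance pooling: each
# trial's effect estimate is weighted by 1/SE^2, so larger, more
# precise trials contribute more to the pooled estimate.
import math

def pool_fixed_effect(estimates, std_errors):
    """Pool per-trial effect estimates; return (pooled, 95% CI)."""
    weights = [1.0 / se**2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))     # SE of the pooled estimate
    ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
    return pooled, ci

# Hypothetical log odds ratios and standard errors from three small trials
effects = [-0.4, -0.2, -0.6]
ses = [0.3, 0.25, 0.4]
pooled, (lo, hi) = pool_fixed_effect(effects, ses)
```

The pooled confidence interval is narrower than any single trial's, which is exactly why combining several small trials can turn individually inconclusive results into a usable estimate.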

Regulatory requirements, particularly in the US, are such that it is not difficult to construct an RCT that will show a superior benefit:risk outcome. Whether that benefit is real, or superior to the best available alternatives, is fraught with uncertainty. EBHC should rest on published evidence, and on efforts to remove bias.

The underlying principle of EBM is the rigorous collation and interpretation of evidence; even so, physicians will have no alternative but to continue to use their own clinical judgment and personal experience.