The review concluded that there was a distinct lack of evidence to support the benefits or cost-effectiveness of drug interaction software. The potential for bias and a limited, poor-quality evidence base make the authors’ cautious conclusions likely to be reliable.

Authors' objectives

To assess the effectiveness and cost-effectiveness of drug interaction detection software in the prevention of adverse drug effects.

Searching

MEDLINE, EMBASE, CINAHL, IPA and HealthStar were searched for studies published in English from January 1966 to June 2006; search terms were reported. Reference lists of retrieved papers were searched.

Study selection

Randomised or non-randomised studies were eligible for inclusion if they evaluated drug interaction software, either as a stand-alone product or as part of a larger clinical decision support system, and measured outcomes specific to the software. The setting could be either hospital or community. Outcomes of interest were morbidity or mortality and surrogate outcomes (such as the number of potential adverse drug interaction events prevented, the adverse drug interaction event rate and the number of inappropriate medications prescribed). Studies were excluded if they did not provide data comparing drug interaction checking with a control, or if they were published before 1990.

Most of the included studies were conducted in hospital settings; one was conducted in a primary care setting. There was a total of 80,471 patient days/visits. The software was often a component of a larger decision support system rather than stand-alone drug interaction detection software. The control was either no drug interaction software or handwritten prescriptions. Studies were undertaken in Canada, Israel and the USA.

Two reviewers independently selected studies for the review. Disagreements were resolved by consensus or consultation with a third reviewer.

Assessment of study quality

The studies were assessed for quality with criteria that included level and method of randomisation, intention-to-treat analysis, whether reasons were given for withdrawals, adequacy of sample size and outcome assessment details.

Two reviewers independently assessed studies for quality. Disagreements were resolved by consensus or consultation with a third reviewer.

Data extraction

Data were extracted on drug interaction rates and standardised as the drug interaction rate per 1,000 patient days. Relative risks (RRs) and their 95% confidence intervals (CIs) were calculated.
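The standardisation and rate-ratio calculation described above can be sketched as follows. This is a minimal illustration only, assuming person-time (patient-day) event data and a Wald-type 95% CI on the log scale; the function names and the 1.96 normal quantile are illustrative assumptions, not details taken from the review:

```python
import math

def rate_per_1000(events, patient_days):
    """Event rate standardised per 1,000 patient days."""
    return events / patient_days * 1000.0

def rate_ratio(events_tx, days_tx, events_ctl, days_ctl):
    """Rate ratio (intervention vs control) with a Wald 95% CI.

    Uses the standard large-sample approximation for person-time
    data: SE(log RR) = sqrt(1/a + 1/b), with a and b the event counts.
    """
    rr = (events_tx / days_tx) / (events_ctl / days_ctl)
    se_log_rr = math.sqrt(1.0 / events_tx + 1.0 / events_ctl)
    lower = math.exp(math.log(rr) - 1.96 * se_log_rr)
    upper = math.exp(math.log(rr) + 1.96 * se_log_rr)
    return rr, (lower, upper)
```

For example, 10 events over 8,000 patient days would standardise to 1.25 events per 1,000 patient days.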

Two reviewers independently extracted data for the review. Disagreements were resolved by consensus or consultation with a third reviewer.

Methods of synthesis

Studies were pooled in a Bayesian meta-analysis using a random-effects model based on Poisson regression. A summary relative risk and its 95% CI were estimated. Heterogeneity was assessed with I². Sensitivity analyses were undertaken to explore heterogeneity further.
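The review fitted a Bayesian random-effects model based on Poisson regression, which requires MCMC software to reproduce. As a simplified frequentist analogue only, the sketch below uses the DerSimonian-Laird random-effects method on per-study log rate ratios, with I² derived from Cochran's Q; the function name and inputs are illustrative assumptions, not the authors' actual model:

```python
import math

def pool_random_effects(log_rrs, std_errs):
    """DerSimonian-Laird random-effects pooling of per-study log rate
    ratios; returns the pooled RR, its 95% CI and I-squared (%)."""
    w = [1.0 / se ** 2 for se in std_errs]  # inverse-variance weights
    fixed = sum(wi * y for wi, y in zip(w, log_rrs)) / sum(w)
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_rrs))  # Cochran's Q
    df = len(log_rrs) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)  # between-study variance estimate
    w_re = [1.0 / (se ** 2 + tau2) for se in std_errs]
    pooled = sum(wi * y for wi, y in zip(w_re, log_rrs)) / sum(w_re)
    se_pooled = math.sqrt(1.0 / sum(w_re))
    rr = math.exp(pooled)
    ci = (math.exp(pooled - 1.96 * se_pooled),
          math.exp(pooled + 1.96 * se_pooled))
    i_squared = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    return rr, ci, i_squared
```

I² expresses the share of total variability attributable to between-study heterogeneity rather than chance; values around 50%, as reported below, are conventionally read as moderate heterogeneity.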

Results of the review

Four studies (80,471 patient days/visits) were included in the review. One study was a cluster randomised controlled trial, one was a prospective cohort study and two were before-and-after case series. Study duration ranged from five to 12 months (median six months).

There was no evidence of a difference in the rate of drug interaction events between intervention and control (RR 0.66, 95% CI 0.33 to 1.18; four studies, I²=52%). Sensitivity analyses did not markedly reduce heterogeneity.

Cost information

There were no data to enable assessment of cost effectiveness.

Authors' conclusions

The available evidence did not demonstrate a benefit of drug interaction detection software systems, nor did it support any policy to disseminate their use widely.

CRD commentary

The review addressed a clear research question supported by appropriately broad inclusion criteria. A range of relevant sources was searched to identify studies published in English; there were no specific attempts to identify unpublished studies or those in other languages, so language and publication biases could not be ruled out. Appropriate methods were used to select studies, assess their quality and extract data, which minimised the chance of reviewer error or bias. Only four studies met the inclusion criteria and only one of these was randomised. The studies were generally of poor quality, unblinded and with non-uniform assessment of outcomes. The experimental intervention was generally a component of a larger decision support system rather than a stand-alone drug interaction software intervention. The included studies measured only one relevant outcome, which was standardised to allow comparisons between studies. Synthesis of these studies may not have been appropriate given the different study designs. Assessment of heterogeneity was appropriate; moderate heterogeneity was identified and was not markedly changed by sensitivity analyses.

The potential for bias and a limited, poor-quality evidence base make the authors’ cautious conclusions likely to be reliable.

Implications of the review for practice and research

Practice: The authors stated that large investments to support widespread implementation of drug interaction software in clinical practice were premature.

Research: The authors stated that there was a need for high-quality studies that specifically explore drug interactions and the impact of software that detects them and alerts clinicians at the point of prescribing.

This is a critical abstract of a systematic review that meets the criteria for inclusion on DARE. Each critical abstract contains a brief summary of the review methods, results and conclusions followed by a detailed critical assessment on the reliability of the review and the conclusions drawn.