Method

The authors searched MEDLINE, EMBASE, CINAHL, ERIC, PsycINFO, and Scopus through February 2009 for studies describing virtual patients used by practicing and student physicians, nurses, and other health professionals. Working in duplicate, reviewers abstracted information on instructional design and outcomes. Effect sizes were pooled using a random-effects model.

Results

The authors found four qualitative studies, 18 no-intervention controlled studies, 21 studies comparing virtual patients with noncomputer instruction, and 11 comparing them with other computer-assisted instruction. Heterogeneity was large (I² > 50%) in most analyses. Compared with no intervention, the pooled effect size (95% confidence interval; number of studies) was 0.94 (0.69 to 1.19; N=11) for knowledge outcomes, 0.80 (0.52 to 1.08; N=5) for clinical reasoning, and 0.90 (0.61 to 1.19; N=9) for other skills. Compared with noncomputer instruction, the pooled effect size (positive numbers favoring virtual patients) was −0.17 (−0.57 to 0.24; N=8) for satisfaction, 0.06 (−0.14 to 0.25; N=5) for knowledge, −0.004 (−0.30 to 0.29; N=10) for reasoning, and 0.10 (−0.21 to 0.42; N=11) for other skills. Comparisons of different virtual patient designs suggest that repetition until demonstration of mastery, advance organizers, enhanced feedback, and explicitly contrasting cases can improve learning outcomes.
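For readers unfamiliar with the statistics reported above, the following is a minimal sketch (not from the review itself) of how a DerSimonian-Laird random-effects pooled estimate, its 95% confidence interval, and the I² heterogeneity statistic are typically computed; the function name and the example data are illustrative, not the review's actual study-level values.

```python
import math

def pool_random_effects(effects, variances):
    """DerSimonian-Laird random-effects pooling (illustrative sketch).

    effects: per-study standardized mean differences (e.g., Hedges g)
    variances: per-study sampling variances
    Returns (pooled effect, 95% CI lower, 95% CI upper, I^2 as a percent).
    """
    k = len(effects)
    w = [1.0 / v for v in variances]  # fixed-effect (inverse-variance) weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)

    # Cochran's Q and the between-study variance tau^2
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)

    # I^2: share of total variability attributable to heterogeneity
    i2 = max(0.0, 100.0 * (q - (k - 1)) / q) if q > 0 else 0.0

    # Random-effects weights fold tau^2 into each study's variance
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se, i2
```

With widely scattered study effects, Q exceeds its degrees of freedom, tau² becomes positive, and I² rises above 50%, mirroring the large heterogeneity the review reports in most of its analyses.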

Conclusions

Virtual patients are associated with large positive effects compared with no intervention. Compared with noncomputer instruction, effects are on average small. Further research is needed to clarify how to implement virtual patients effectively.