
A review of the literature suggests that the choice of control group may have affected the results and policy implications of the major evaluations of governmental training programs. It is argued that the usual evaluation designs underadjust for preprogram differences between trainees and controls and thus yield biased estimates of program impact. Attempts to correct statistically for such bias are presented and discussed.
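The mechanism behind this bias can be sketched with a small simulation. This is an illustrative toy model, not taken from the paper: the parameter values, the selection rule, and the two estimators (a naive post-program comparison and a difference-in-differences contrast) are assumptions chosen to show how a nonrandom control group can distort an impact estimate in either direction.

```python
import numpy as np

# Illustrative simulation (assumed setup, not from the paper): trainees
# self-select after a low-earnings period, so they differ from controls in
# both permanent earnings and a transitory pre-program dip.
rng = np.random.default_rng(0)
n = 200_000
true_effect = 1500.0                        # assumed training impact ($)

permanent = rng.normal(15_000, 3_000, n)    # stable individual earnings level
pre = permanent + rng.normal(0, 2_000, n)   # pre-program earnings
trainee = pre < 12_000                      # selection into training on low pre earnings

post = permanent + rng.normal(0, 2_000, n) + true_effect * trainee

# Naive post-program comparison: underadjusts, because it ignores the
# permanent earnings gap between trainees and controls.
naive = post[trainee].mean() - post[~trainee].mean()

# Difference-in-differences: removes the permanent gap, but because
# selection was partly on a transitory dip it overcorrects instead.
did = (post[trainee] - pre[trainee]).mean() - (post[~trainee] - pre[~trainee]).mean()

print(f"true effect:                   {true_effect:.0f}")
print(f"naive post-program comparison: {naive:.0f}")   # far below the true effect
print(f"difference-in-differences:     {did:.0f}")     # above the true effect
```

In this toy setup the naive comparison is badly biased downward (it can even flip sign), while differencing out pre-program earnings overshoots because of mean reversion in the transitory dip, illustrating why the choice of control group and adjustment method can drive an evaluation's conclusions.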