The science and practice of continuing medical education: a study in dissonance

The editorial in the January/February 1993 issue of ACP Journal Club dealt with the evolving role of clinical practice guidelines in narrowing the gap
between research evidence and clinical practice (1). An article abstracted in this issue (2; see “Continuing Medical Education: A Review” [Abstract]) reviews randomized controlled trials of continuing education, a key link in the dissemination of information from medical research into clinical practice. This review is as interesting for what has not
been studied as for what has: Most of what physicians say they do to keep up to date
has either not been tested or has been tested and found wanting.

Two common methods used to determine how practicing physicians continue to learn are
mailed questionnaire surveys and structured interviews. In a recent example of the
former, Mann and Chaytor (3) mailed a pretested questionnaire to all physicians in a geographically designated
area. The 3 resources respondents cited most often for meeting ongoing learning needs were journal subscriptions (61%), consultation with colleagues (56%), and journals received through membership in professional associations (51%). Lectures were also rated as highly effective
by 84% of primary care physicians and 78% of specialists. In contrast, only 42% of
family physicians and 63% of specialists owned computers, and over two thirds rated
their own computer skills as low.

Using trained interviewers and a rigorously pretested interview format, Fox and colleagues
(4) asked over 300 North American physicians what clinical changes they had made in
their practices and what learning resources they used to make these changes. Physicians
stated that they learned mostly from discussion with colleagues, reading, rounds,
and conferences.

It is disturbing to compare physicians' self-reported educational preferences and
activities with the evidence from randomized trials of continuing medical education
(CME) modalities (2). No controlled studies of journal reading have been done, but mailed materials, such
as newsletters and CME correspondence courses, were found to be ineffective. Does
this mean that physicians do not learn or change their performance as a result of
their most popular CME activity, reading journals? This question has not been studied
directly and quantitatively, but there is some indirect, qualitative evidence. In
Fox and colleagues' study, physicians stated that they participated in formal and
informal CME activities to validate their own practices against existing standards.
Yet in one of the first randomized trials of CME (5), participants showed little change in performance after a CME program for topics
they chose themselves. Participants, however, did improve their knowledge and performance
for topics that were assigned to them from among those they had not chosen. Thus, individual clinicians may not be the best judges of what they need to
learn, and CME activities that clinicians choose themselves—perhaps including selective
reading of journal articles—may not have much effect if they address topics that the
clinician already handles well.

Direct evidence for the effectiveness of consultation with peers or specialists, the second most frequently used CME resource, is also lacking in the review (2). Closely allied to the process of consultation, however, is the role of the “educational
influential” (4) or “opinion leader” (5). These individual practitioners are regarded by their colleagues as reliable sources
of new information. Randomized trials show that engaging educational influentials
in a tailored CME program leads to the message being spread to the influentials' colleagues, with improvements in those colleagues' performance (6) and in the health outcomes of their patients (7).

In contrast to the 2 most popular learning resources used by physicians, a wealth
of evidence exists about formal CME programs such as clinical rounds, conferences,
or workshops. It is difficult to show any effect on performance from didactic CME
presentations. Rather, effective programs appear to possess one or more of the following
elements: a precourse assessment of needs by auditing the participant's practice or
testing his or her knowledge; provision of an opportunity for participants to practice
or rehearse new techniques or skills; and provision of activities after the course
that check and facilitate change in the clinician's office or practice setting.

Physicians' negative attitudes toward computers in the study of Mann and Chaytor (3) are at odds with some findings from the CME review. The computer was shown to be
an effective CME tool, especially when used to generate reminders about preventive
care or follow-up. The contrast between the effectiveness of some computer decision support systems and the discomfort that many clinicians have with computers suggests that practicing physicians should seek opportunities to learn what computers can, and cannot, do
to assist them in patient care. This learning, of course, should take place through
what we know to be effective CME modalities such as hands-on preceptorships.

The dissonance is great between the CME activities that physicians prefer for keeping
up to date and those that have been tested and shown to work. The consequences are
documented by the numerous quality-of-care studies showing that clinicians are slow
to pick up new, validated practices (such as prophylaxis for deep venous thrombosis
for high-risk patients [1]) and are slow to discard practices that have been shown
to be ineffective or harmful (such as the current widespread prescribing of calcium antagonists
after myocardial infarction despite evidence that they are harmful for some patients
[8]). We need to understand the reasons for this important dissonance so that the
considerable time clinicians devote to CME can be better spent. In the meantime, physicians
who want to improve their performance through formal CME should select courses that
begin with a needs assessment, provide performance rehearsal, and facilitate practice
changes. They should also explore the use of office computers to provide reminders
or feedback about needed preventive and follow-up care.