Become a Member

Membership is available to any healthcare provider, educator, or clerkship administrative staff person interested in or participating in the coordination of medical student education. Learn how to become a member today!

Member Spotlight

COMSEP would like to congratulate the 2017 Awardees

MacKenzi Hillard, MD
Weill Cornell University School of Medicine
Technology at the Bedside: Using Point-of-Care Tools to Enhance Clinical Reasoning in Pediatric Settings

Cindy Osman MD, MS
New York University School of Medicine
The Student Workplace-Based Easy Evaluation Tool (SWEETool): Feasibility and Validity

Journal Club

Finally! There may be some value in self-assessment after all! The readiness for clerkship survey: Can self-assessment data be used to evaluate program effectiveness? Peterson et al. Academic Medicine 2012;87(10):1355-60.

Reviewed by Patricia Kavanagh

What was the study question?
Can aggregated self-assessment data of clerkship readiness provide meaningful
information to evaluate the effectiveness of an educational program?

How was the study done?
The authors created a 39-item Readiness for Clerkship survey using several key
competence documents and expert review. Two cohorts of third year students at the
University of British Columbia and clinical preceptors were surveyed; students rated
their own performance and faculty rated the competence of the group as a whole.

What were the results?
Data were collected from two classes of students. One cohort completed the survey
six months after starting clerkship (68% response rate) and the other four months
after starting clerkship (64% response rate); 26% and 39% of faculty, respectively,
completed the survey in the same periods. Results showed that while an individual
student's ratings were not a reliable indication of which competencies an
educational program had enabled, pooling the ratings produced reliable results. The
correlations between aggregated student ratings and aggregated faculty ratings were
r=0.88 and r=0.91 for the two cohorts. Only a small sample was needed to achieve
this interrater reliability: 9-21 students and 26-45 faculty.

What are the implications of these findings?
Although students overestimated their competence, their aggregated scores could
be used to identify the strengths and weaknesses of an educational program. The
survey tool, or one modified to suit your own institution's curriculum
objectives, could be used to assess how well a curriculum enables students to
achieve the necessary competencies.

Editor’s note: Wow…a use for self-assessment! Incredible. Kevin Eva (one of the study authors who routinely speaks of the perils of self-assessment) must be intrigued. (Dr. Eva will be the keynote speaker at our combined COMSEP/APPD meeting in Nashville.) This fascinating study offers a great way to add another dimension to program evaluation (SLB).