
In Reply to White:

We appreciate Dr. White’s comments and acknowledge the complexity of this discussion. Importantly, our study revealed the extreme variation and imprecision of medical school grades (and grading terminology) throughout the United States. The creation of valid and reliable assessment tools used by all medical schools may indeed be difficult, but we disagree that it is unachievable. For example, the clerkship subject matter examinations of the National Board of Medical Examiners have been shown to be both valid and reliable and are widely used throughout U.S. medical schools. Nonetheless, few would argue that a single multiple-choice test should serve as the only metric determining a clerkship grade. Multiple, varied, and synergistic assessment tools should be employed, uniquely designed to evaluate a medical school’s students and curriculum.1 We believe such tools are already in active use throughout U.S. medical schools and should continue to be.

Our findings have opened a debate regarding the standardization of clerkship grades. The author questions whether the time and resources required to achieve such standardization would be justified. His comparison to military 4-star generals is an interesting one, though it is worth reflecting that the success of those generals should not be attributed only to their West Point undergraduate educations. Subsequent training, maturation, and experience arguably played an equally meaningful role in promotion to general. In that vein, our data do not convey whether and how medical school evaluations predict future success. Rather, they show a system with extreme variation and imprecision, leading to a lack of grade meaning and transparency. Arguably, this has negative downstream consequences.

Durning and Hemmer provide a valuable commentary surrounding the grading issue.2 However, such discussions often convey options as only binary—normative versus criterion-based grading; mastery versus performance-based goals. Improvement does not necessitate such an absolute choice or require inflexible grade distributions or exacting assessment tools. A major (yet minimally disruptive) step forward would be the universal standardization of grading options and terminology (e.g., honors, pass, fail), followed by school-specific transparency of grade distribution and description of achieved competencies for each level. Such changes still allow flexibility for clerkship-specific assessment tools, desired competencies, and non-normative grade distributions.

It is hard to argue that our present system should not evolve. Ask yourself: if a graduate told you her medical school grades were “honors,” or “satisfactory,” or “C−,” would you be confident in what that means? What has she mastered? And is this student ready for the next level of training or independent doctoring? Most would answer “I don’t know”—and that is the impetus for change.