Debriefing and Simulation Trail Mix – a little bit of a bunch of things

13 teams – randomized to Self Directed learning (described in this paper as allowing students to proceed through the scenario with little or no input from the instructor until after the scenario) with facilitated debriefing vs. instructor modeled learning (defined in this paper as instructors modeling the appropriate responses and interventions during a team simulated clinical scenario while the students or participants observe, with the experts verbalizing what they are doing and why it works in the scenario — this then followed by the students participating in a simulation themselves) with modified debriefing afterwards. Team composition: all seemingly novice students – NP, RN, RT, and social work. Debriefing was reportedly performed by someone trained through a comprehensive simulation workshop and debriefing workshop – the exact question styles used were not stated.

Questions from me? People are generally happier when they explicitly know what is expected of them. Did the IML just teach to the test/evaluation? Is that necessarily a bad thing? Does this work better for novice vs. expert learners? How do we model what we want students to do? Video, other methods, simprov? Personally, I have had our residents watch videos of teams prior to coming into our simulations – crisiscode.org (shoutout! to the Stanford AIM LAB http://aim.stanford.edu/ )

First – What is Directed Self Regulated Learning? Well, “It requires a knowledgeable educator to design practice conditions using validated learning principles. A trainee then steps into this structured setting and is given a limited control of a specific aspect of practice and therefore is metacognitively, behaviorally, and motivationally active in the learning experience.”

Instructor regulated learning, as described in their paper, is similar to what many centers do for procedural group training: one instructor, four residents, everybody gets a turn to demonstrate the skill – then you move on.

45 PGY1 Internal Medicine residents – all receiving a standard pre-training curriculum and standard testing/questionnaires before and after the educational intervention – were randomly assigned to Directed Self Regulated Learning (DSRL: a solo participant could choose when to progress from easy to hard LP simulator models on their own within a 35 minute time frame, could replay the pre-training video ad lib during the session, then took the post test, then got 15 minutes of feedback) vs. Instructor Regulated Learning (IRL) for teaching lumbar puncture using simulation (instructor ratio 1:4; no access to the video during the session, as the instructor was the resource; instructor and students collectively decided when to progress from easy to hard – still within a 35 minute time frame shared among the 4 students rather than 1; then the post test, then 15 minutes of feedback as a group). Everybody then took a 3 month retention test.

Both post simulation and in-simulation debriefings have been used in medical simulation, but the question remains: which method is more effective? The authors compared these two styles of debriefing with 161 third year medical students participating in ACLS simulations. A retrospective pre-post assessment was completed by the students on “self reported confidence, level of knowledge related to medical resuscitation and the simulation itself.”

The post simulation debrief group gave higher rankings for “effectiveness of the debriefing style, debriefing leading to effective learning, and understanding of correct and incorrect actions.” These results were statistically significant, and students rated this method as more effective overall.

The discussion noted advantages and disadvantages of each method and its effect on the quality of learning. Their students, however, “did not feel that interruptions during a simulation significantly altered the realism of the simulation.”

The study’s limitations include that it examined one level of learner in one type of simulation, so the results cannot be generalized to other levels of learners or other types of simulation. It did not include clinical outcome data, nor did it specify the amount of time spent in simulation versus debriefing for each group (total length of time = 20 min).

Dieckmann et al. discuss the need to build into scenarios the possibility that learners will not proceed down the paths that the instructor considers possible or likely. This might result from altered learner comprehension of the meanings and clues within a scenario, inability to accept the scenario as plausible, or a mismatch between the scenario difficulty and the learner’s ability. These situations might compromise the potential for learning in a simulation.

“Life savers” attempt to rescue the scenario and make it meaningful and relevant for the learner. These life savers can be brought into the scenario from within or outside the scenario. Examples of altering the scenario from within would include having a confederate administer a drug that is necessary to the progress of the scenario when the learners have failed to do it, or manipulating vital signs on the fly from the control room to make manikin status changes more or less obvious. Examples of altering the scenario from outside would include speaking via the overhead speaker into a scenario to stop participants from doing an action that would be harmful to themselves or to the manikin, or stopping and restarting a scenario to recover from control room or technology mishaps.

Life savers that are employed by a role player from within a scenario must follow the logic of the scenario. They must make sense to the learner in the context of the situation. By contrast, life savers brought from outside the scenario need not, and in some cases can intentionally disrupt the scenario.

This has implications for both scenario design and prebriefing. During design of scenarios, instructors should consider the potential need for life savers, and how they might be implemented if necessary. The learners should be prepared for this possibility during the prebrief.