Legal action is a common concern: approximately 75% of physicians who practice in an emergency department are sued for malpractice at some point during their careers.

A recent simulation study by Schoenfeld and colleagues examined how 3 levels of shared decision making influence the likelihood that patients will take legal action after adverse outcomes. According to the results, the potential risks of shared decision making in emergency department medicine are low, and the potential benefits are high. Nonetheless, the study has limitations, including findings that cannot be generalized to additional scenarios and biases that arise from not accounting for subpopulations that would influence the most appropriate approach, according to an editorial published in Annals of Emergency Medicine. In particular, the editorial’s author pointed out the danger of building algorithms on clinical research studies that could carry an unrecognized initial sampling bias.

Schoenfeld and colleagues performed a simulation study examining 3 levels of shared decision making (none, brief, and thorough) by using crowd-sourced simulation with nonphysician respondents. The authors then determined how these levels affected the likelihood of patients taking legal action in response to adverse outcomes resulting from emergency department treatment. They indicated support for using shared decision making, in situations where patients can participate in the decision-making process, as a way to reduce medical malpractice suits.

Schoenfeld and colleagues presented a set of vignettes designed to draw out realistic responses and decisions from participants, who were recruited through the crowd-sourcing platform Amazon Mechanical Turk. The participant pool of Mechanical Turk is larger and more diverse than the undergraduate populations often used for this kind of research, and other researchers have shown that behavioral experiments conducted through online labor markets have internal and external validity comparable to that of field and laboratory experiments.

Schoenfeld and colleagues suggested that, in the vignettes used, shared decision making could lead to fewer complaints and malpractice suits. In actual situations, however, personal dynamics, communication breakdowns, and emotional stakes could prevent patients and providers from engaging rationally in the process. The simplification required by a fixed vignette may fail to capture the complexity of human dynamics that would color the effectiveness of shared decision making in real life.

The editorial author stated that despite the many strengths of the study’s novel approach, the findings may not be generalizable to additional scenarios because of unrecognized biases that could leave smaller subpopulations unrepresented. The best approach to shared decision making may depend on education, culture, socioeconomic status, or race, and crowd-sourced participants may not always represent these crucial factors.

Furthermore, any algorithms developed from such research could have an inherent bias that “may ultimately disenfranchise subgroups and minority populations. Identifying potential sources of bias during study design can prevent its occurrence. If a known demographic profile needs to be achieved, it is possible on a crowd-sourcing platform to structure recruitment to meet the desired profile.”

Summarizing that Schoenfeld and colleagues’ simulation study supports the value of shared decision making, which appears to be low risk with the potential for great benefit, the editorial author concluded, “So, too, with appropriate attention to design, recruitment, and bias, may be the potential for crowd-sourced behavioral research in medicine.”