Abstract

Many surgical assessment metrics have been developed to identify and rank surgical expertise; however, some of these metrics (e.g., economy of motion) can be difficult to understand and do not coach the user on how to modify behavior. We aim to standardize assessment language by identifying key semantic labels for expertise. We chose six pairs of contrasting adjectives and associated a metric with each pair (e.g., fluid/viscous correlated to variability in angular velocity). In a user study, we measured quantitative data (e.g., limb accelerations, skin conductivity, and muscle activity) for subjects (n=3, novice to expert) performing tasks on a robotic surgical simulator. Task and posture videos were recorded for each repetition, and crowd-workers labeled the videos by selecting one word from each pair. The expert was assigned more positive words and also had better quantitative metrics for the majority of the chosen word pairs, showing feasibility for automated coaching.
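The abstract associates each adjective pair with a quantitative metric, e.g., fluid/viscous with variability in angular velocity. The sketch below is not the authors' implementation; it only illustrates one plausible reading of that metric, assuming angular velocity is obtained by finite differences of a sampled angle signal and variability is its standard deviation.

```python
import numpy as np

def angular_velocity_variability(angles, dt):
    """Variability of angular velocity for a sampled angle trajectory.

    angles: 1-D array of tool/limb angles in radians, sampled every dt seconds.
    Returns the standard deviation of the finite-difference angular velocity;
    lower values correspond to more "fluid" motion, higher to more "viscous".
    """
    omega = np.gradient(angles, dt)  # finite-difference angular velocity (rad/s)
    return float(np.std(omega))

# Illustration: a constant-rate rotation vs. the same rotation with
# tremor-like ripple superimposed (both hypothetical signals).
t = np.linspace(0.0, 2.0, 201)
dt = t[1] - t[0]
smooth = 1.5 * t                         # constant 1.5 rad/s -> near-zero variability
jerky = 1.5 * t + 0.2 * np.sin(20 * t)   # same trend plus ripple -> high variability
```

Under this reading, ranking repetitions by `angular_velocity_variability` would order the smooth trajectory ahead of the jerky one, matching the fluid/viscous label.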

Publication series

Other

1st International Workshop on Simulation and Synthesis in Medical Imaging, SASHIMI 2016 held in conjunction with 19th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2016

title = "Meaningful assessment of surgical expertise: Semantic labeling with data and crowds",

abstract = "Many surgical assessment metrics have been developed to identify and rank surgical expertise; however, some of these metrics (e.g., economy of motion) can be difficult to understand and do not coach the user on how to modify behavior. We aim to standardize assessment language by identifying key semantic labels for expertise. We chose six pairs of contrasting adjectives and associated a metric with each pair (e.g., fluid/viscous correlated to variability in angular velocity). In a user study, we measured quantitative data (e.g., limb accelerations, skin conductivity, and muscle activity) for subjects (n=3, novice to expert) performing tasks on a robotic surgical simulator. Task and posture videos were recorded for each repetition, and crowd-workers labeled the videos by selecting one word from each pair. The expert was assigned more positive words and also had better quantitative metrics for the majority of the chosen word pairs, showing feasibility for automated coaching.",

note = "1st International Workshop on Simulation and Synthesis in Medical Imaging, SASHIMI 2016 held in conjunction with 19th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2016 ; Conference date: 21-10-2016 Through 21-10-2016",

}

TY - GEN

T1 - Meaningful assessment of surgical expertise: Semantic labeling with data and crowds

T2 - 1st International Workshop on Simulation and Synthesis in Medical Imaging, SASHIMI 2016 held in conjunction with 19th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2016

AU - Ershad, Marzieh

AU - Koesters, Zachary

AU - Rege, Robert V

AU - Majewicz, Ann

PY - 2016

Y1 - 2016

N2 - Many surgical assessment metrics have been developed to identify and rank surgical expertise; however, some of these metrics (e.g., economy of motion) can be difficult to understand and do not coach the user on how to modify behavior. We aim to standardize assessment language by identifying key semantic labels for expertise. We chose six pairs of contrasting adjectives and associated a metric with each pair (e.g., fluid/viscous correlated to variability in angular velocity). In a user study, we measured quantitative data (e.g., limb accelerations, skin conductivity, and muscle activity) for subjects (n=3, novice to expert) performing tasks on a robotic surgical simulator. Task and posture videos were recorded for each repetition, and crowd-workers labeled the videos by selecting one word from each pair. The expert was assigned more positive words and also had better quantitative metrics for the majority of the chosen word pairs, showing feasibility for automated coaching.