ORIGINAL REPORTS
Concordance Between Expert and Nonexpert Ratings of Condensed Video-Based Trainee Operative Performance Assessment
Section snippets
INTRODUCTION
Development as a surgeon requires the acquisition and refinement of operative skills. Technical competence is difficult to define and challenging to measure. Currently, general surgery trainees in the United States must complete at least 6 operative performance evaluations during their training,1 and this requirement may increase in the future. This, however, presents some implementation challenges. Specifically, this increased measurement requirement may not be feasible if the burden is …
Audio/Video Capture
Intraoperative audio and video of 6 general surgery procedures (laparoscopic cholecystectomy, laparoscopic colectomy, laparoscopic inguinal hernia repair, open inguinal hernia repair, open ventral hernia repair, and thyroidectomy) (Table 1) were recorded (N = 6 full-length videos, 6 condensed videos). All videos used in the current work were captured at a single institution. These specific procedures were chosen based on how common they were and the availability of widely used …
RESULTS
A total of 78 assessments were collected across 6 procedures. For each procedure, 8 assessments were completed by evaluation experts and 5 assessments by nonexpert raters (Table 1). Expert raters generally scored performances lower than nonexpert raters on the OPRS (-0.16 standard deviations [SD]). In contrast, compared to nonexperts, expert raters scored performances slightly higher with the SIMPL (0.18 SD), ten Cate (0.25 SD), and Zwisch (0.21 SD) tools. None of these differences was statistically significant.
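The article does not specify how the score gaps between expert and nonexpert raters were standardized. The sketch below is a minimal illustration, assuming a pooled-SD standardization (Cohen's d-style) and using entirely made-up OPRS ratings; the function name and the data are hypothetical, not the authors' actual analysis.

```python
import numpy as np

def standardized_difference(expert_scores, nonexpert_scores):
    """Difference in mean score (expert minus nonexpert), expressed in
    pooled standard deviation units. Hypothetical helper; the paper does
    not describe its exact standardization procedure."""
    expert = np.asarray(expert_scores, dtype=float)
    nonexpert = np.asarray(nonexpert_scores, dtype=float)
    pooled_sd = np.sqrt(
        ((expert.size - 1) * expert.var(ddof=1)
         + (nonexpert.size - 1) * nonexpert.var(ddof=1))
        / (expert.size + nonexpert.size - 2)
    )
    return (expert.mean() - nonexpert.mean()) / pooled_sd

# Illustrative (invented) OPRS ratings on a 1-5 scale for one procedure:
expert_oprs = [3.0, 3.5, 3.0, 3.5, 4.0, 3.0, 3.5, 3.0]   # 8 expert raters
nonexpert_oprs = [3.5, 4.0, 3.5, 3.5, 4.0]               # 5 nonexpert raters
print(f"OPRS difference: {standardized_difference(expert_oprs, nonexpert_oprs):+.2f} SD")
```

A negative value corresponds to expert raters scoring lower than nonexperts, matching the sign convention used for the OPRS result above.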
DISCUSSION
In this study, we found no difference between raters with expertise in surgical resident evaluation and those without, and no difference between condensed and full-length videos. Assessments by nonexpert surgeon raters appear to be acceptable and did not differ statistically significantly from evaluations performed by evaluation experts. Furthermore, the high residual variance suggests that the vast majority of the variability in …
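The reference to residual variance implies a variance-components style analysis, but the article snippet does not state the exact model. As a rough, hedged sketch, a random-intercept mixed model (here fit with statsmodels on fabricated data) shows how the residual share of total variance could be estimated; all data, variable names, and the model specification are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative (invented) long-format ratings: one row per rater x procedure.
rng = np.random.default_rng(0)
procedures = ["lap_chole", "lap_colectomy", "lap_inguinal",
              "open_inguinal", "open_ventral", "thyroidectomy"]
rows = []
for proc in procedures:
    base = rng.normal(3.5, 0.3)                      # procedure-level mean score
    for rater_type, n in [("expert", 8), ("nonexpert", 5)]:
        for _ in range(n):
            rows.append({"procedure": proc,
                         "rater_type": rater_type,
                         "score": base + rng.normal(0, 0.5)})
df = pd.DataFrame(rows)

# Random intercept for procedure; rater type as a fixed effect.
model = smf.mixedlm("score ~ rater_type", df, groups=df["procedure"])
fit = model.fit()

procedure_var = float(fit.cov_re.iloc[0, 0])   # between-procedure variance
residual_var = fit.scale                       # within-procedure (residual) variance
share = residual_var / (procedure_var + residual_var)
print(f"Residual share of total variance: {share:.0%}")
```

A large residual share in such a model would indicate that most score variability is not explained by the grouping factors included, which is the kind of pattern the discussion describes.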
CONCLUSION
There is no difference in performance assessment scores between full-length and condensed videos. In addition, nonexpert surgeon raters are no different from surgeon raters with expertise in resident evaluation in the assessment of surgical performance. Future studies of operative performance may be facilitated by using nonexpert surgeon raters viewing condensed videos edited using a standardized protocol.
REFERENCES (18)
- et al. Using GoPro to give video-assisted operative feedback for surgery residents: a feasibility and utility assessment. J Surg Educ (2018).
- et al. Video-based assessment in surgical education: a scoping review. J Surg Educ (2019).
- et al. Feasibility, reliability and validity of an operative performance rating system for evaluating surgery residents. Surgery (2005).
- et al. The feasibility of real-time intraoperative performance assessment with SIMPL (System for Improving and Measuring Procedural Learning): early experience from a multi-institutional trial. J Surg Educ (2016).
- et al. Reliability, validity, and feasibility of the Zwisch scale for the assessment of intraoperative performance. J Surg Educ (2014).
- Protecting patients while advancing education. J Am Coll Surg (2014).
- et al. Developing the blueprint for a general surgery technical skills certification examination: a validation study. J Surg Educ (2018).
- et al. A novel method for real-time audio recording with intraoperative video. J Surg Educ (2015).
- et al. Value and barriers to use of the SIMPL tool for resident feedback. J Surg Educ (2019).
Funding: The project was supported by a grant from the Association for Surgical Education (ASE) and Association of Program Directors in Surgery (APDS).