Article ID: 3858554
Journal: The Journal of Urology
Published Year: 2016
Pages: 8
File Type: PDF
Abstract

Purpose: Standardized assessment of laparoscopic skill in urology is lacking. We investigated whether the AUA (American Urological Association) BLUS (Basic Laparoscopic Urologic Skills) skill tasks are valid to address this need.

Materials and Methods: This institutional review board approved study included 27 medical students, 42 urology residents, 18 fellows and 37 faculty urologists across 8 sites. Using the EDGE (Electronic Data Generation and Evaluation) device (Simulab, Seattle, Washington), 454 recordings were collected on the peg transfer, pattern cutting, suturing and clip applying tasks, which together comprise the expert-determined BLUS tasks. We collected synchronized video and tool motion data for each trial. For each task, errors, time, path length, economy of motion, peak grasp force and EDGE score were recorded. An expert panel of 5 faculty members performed GOALS (Global Objective Assessment of Laparoscopic Skills) evaluations on a representative subset of peg transfer and suturing tasks performed by 24 participants (interrater reliability = 0.95).

Results: Demographically derived skill levels proved unsuitable for evaluating construct validity. Separation of mean scores by grouped skill level was strongest for the suturing task. Objective motion metrics and errors supported construct validity through correlation with blinded expert video ratings (motion metrics R2 = 0.95, p <0.01). Expert scores appeared to reward errors in suturing but not in block transfer.

Conclusions: BLUS skill task performance scoring can discriminate among basic laparoscopic technical skill levels. Self-reported demographics are an unreliable basis for determining laparoscopic technical skill.

Related Topics
Health Sciences; Medicine and Dentistry; Nephrology