Article ID: 4309278
Journal: Surgery
Published Year: 2010
Pages: 9
File Type: PDF
Abstract

Background
Validating assessment tools in surgical simulation training is critical to measuring skills objectively. Most reviews do not describe methodologies for conducting rigorous validation studies. Our study reports current methodological approaches and proposes benchmark criteria for establishing validity in surgical simulation studies.

Methods
We conducted a systematic review of studies establishing validity. A PubMed search was performed with the following keywords: “validity/validation,” “simulation,” “surgery,” and “technical skills.” Descriptors were tabulated for 29 methodological variables by 2 reviewers.

Results
A total of 83 studies were included in the review. Of these studies, 60% targeted construct validity, 24% targeted concurrent validity, and 5% examined predictive validity. Less than half (45%) of the studies reported reliability data. Most studies (82%) were conducted at a single institution, with a mean of 37 subjects recruited. Only half of the studies provided a rationale for task selection. Data sources included simulator-generated measures (34%), performance assessment by human evaluators (33%), motion tracking (6%), and combined modes (28%). In studies using human evaluators, videotaping was a common (48%) blinding technique; however, 34% of the studies did not blind evaluators. Commonly reported outcomes included task time (86%), economy of motion (51%), technical errors (48%), and number of movements (25%).

Conclusion
The typical validation study comes from a single institution with a small sample size, lacks a clear justification for task selection, omits reliability reporting, and introduces potential bias into the study design. The lack of standardized validation methodologies creates challenges for training centers that survey the literature to determine the appropriate method for their local settings.

Related Topics
Health Sciences; Medicine and Dentistry; Surgery
Authors