Article ID: 516472
Journal: International Journal of Medical Informatics
Published Year: 2006
Pages: 7
File Type: PDF
Abstract

Objective
To evaluate the accuracy of an automated algorithm for scoring physicians' responses to open-ended clinical vignettes against explicit, evidence-based quality criteria.

Methods
One hundred sixteen physicians completed a total of 915 computerized clinical vignettes at 4 sites. Each vignette simulated an outpatient primary care visit for one of 8 different clinical cases. The automated algorithm scored each disease-specific quality criterion as done or not done by recognizing the presence or absence of predefined patterns in the physician's text response to the vignette. Scores generated by the automated algorithm for each criterion were compared to scores generated by trained human abstractors. Vignette responses were divided into development and test sets. Percentage agreement between automated and manual scores was computed separately for the development and test sets. Sensitivity and specificity were calculated. Costs of automated and manual scoring were compared.

Results
Accuracy of the algorithm exceeded 90% for both the development and test sets, and was high for care items deemed either necessary or unnecessary, across diverse clinical cases, and for all domains of the outpatient clinical encounter. The sensitivity of the automated scoring algorithm was 89.0%, and specificity was 93.5%. Automated scoring was approximately 84% less expensive than manual scoring.

Conclusion
Automated scoring of computerized vignettes appears feasible and accurate. Computerized vignettes incorporating accurate automated scoring offer the promise of a highly standardized but relatively inexpensive measurement tool for a wide range of quality assessments within and across health systems.
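
The Methods describe scoring each quality criterion as done or not done by detecting predefined patterns in the physician's free-text response. A minimal sketch of that idea, assuming hypothetical criterion names and regex patterns (the paper's actual pattern set is not given):

```python
import re

# Hypothetical criterion -> pattern map; the study's real patterns are not
# published in the abstract, so these are illustrative placeholders.
CRITERION_PATTERNS = {
    "ordered_hba1c": re.compile(r"\b(hba1c|hemoglobin a1c|glycated hemoglobin)\b", re.I),
    "checked_blood_pressure": re.compile(r"\b(blood pressure|bp)\b", re.I),
}

def score_response(response_text: str) -> dict[str, bool]:
    """Mark each criterion as done (True) or not done (False) depending on
    whether its pattern appears anywhere in the free-text response."""
    return {
        criterion: bool(pattern.search(response_text))
        for criterion, pattern in CRITERION_PATTERNS.items()
    }

if __name__ == "__main__":
    text = "Checked BP, ordered hemoglobin A1c, and discussed diet."
    print(score_response(text))
    # {'ordered_hba1c': True, 'checked_blood_pressure': True}
```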
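The Results report percentage agreement, sensitivity, and specificity of the automated scores relative to manual scores. A short sketch of those computations, treating the human abstractors' scores as the reference standard (function and variable names are illustrative):

```python
def agreement_metrics(auto: list[bool], manual: list[bool]) -> dict[str, float]:
    """Compare automated scores to manual reference scores, item by item.
    Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP);
    percent agreement = (TP + TN) / total. Assumes both classes occur."""
    tp = sum(a and m for a, m in zip(auto, manual))          # both say done
    tn = sum(not a and not m for a, m in zip(auto, manual))  # both say not done
    fp = sum(a and not m for a, m in zip(auto, manual))      # auto-only done
    fn = sum(not a and m for a, m in zip(auto, manual))      # manual-only done
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "percent_agreement": (tp + tn) / len(auto),
    }

if __name__ == "__main__":
    auto   = [True, True, False, False, True]
    manual = [True, False, False, False, True]
    print(agreement_metrics(auto, manual))
    # {'sensitivity': 1.0, 'specificity': 0.666..., 'percent_agreement': 0.8}
```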

Related Topics
Physical Sciences and Engineering > Computer Science > Computer Science Applications