Article ID: 1083522
Journal: Journal of Clinical Epidemiology
Published Year: 2006
Pages: 4
File Type: PDF
Abstract

Background and Objective
To examine the extent of bias introduced into diagnostic test validity research by the use of post hoc, data-driven analysis to generate an optimal diagnostic cut point for each data set.

Methods
Analysis of simulated data sets of test results for diseased and nondiseased subjects, comparing data-driven with prespecified cut points across a range of sample sizes and disease prevalence levels.

Results
In studies of 100 subjects with 50% prevalence, a positive bias of five percentage points of sensitivity or specificity was found in 6 of 20 simulations. In studies of 250 subjects with 10% prevalence, a positive bias of five percentage points was observed in 4 of 20 simulations.

Conclusion
The use of data-driven cut points exaggerates test performance in many simulated data sets, and this bias probably affects many published diagnostic validity studies. Prespecified cut points, when available, would improve the validity of diagnostic test research in studies with fewer than 50 cases of disease.
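The abstract does not spell out the simulation mechanics, so the sketch below only illustrates the general idea under assumed conditions: test scores drawn from unit-variance normal distributions one standard deviation apart, the data-driven cut point chosen post hoc to maximize the Youden index on the same sample, and a prespecified cut point of 0.5 taken as the population-level threshold. The distributions, the Youden-index rule, and the prespecified value are all assumptions for illustration, not the authors' actual simulation design.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n, prevalence, prespecified_cut=0.5):
    """One simulated diagnostic validity study (assumed setup).

    Diseased and nondiseased test scores are drawn from unit-variance
    normals one standard deviation apart; the abstract does not state
    the underlying distributions.
    """
    n_diseased = int(round(n * prevalence))
    n_healthy = n - n_diseased
    diseased = rng.normal(1.0, 1.0, n_diseased)
    healthy = rng.normal(0.0, 1.0, n_healthy)

    def sens_spec(cut):
        sens = np.mean(diseased >= cut)   # true positive rate
        spec = np.mean(healthy < cut)     # true negative rate
        return sens, spec

    # Data-driven cut point: chosen post hoc to maximize the Youden
    # index (sensitivity + specificity - 1) on this same data set.
    candidates = np.concatenate([diseased, healthy])
    youden = [sum(sens_spec(c)) - 1 for c in candidates]
    data_driven_cut = candidates[int(np.argmax(youden))]

    return sens_spec(prespecified_cut), sens_spec(data_driven_cut)

# 20 simulations of 100 subjects at 50% prevalence, mirroring one of
# the scenarios described in the abstract.
for i in range(20):
    (pre_sens, pre_spec), (post_sens, post_spec) = simulate(100, 0.5)
    print(f"run {i:2d}  prespecified sens/spec = {pre_sens:.2f}/{pre_spec:.2f}  "
          f"data-driven sens/spec = {post_sens:.2f}/{post_spec:.2f}")
```

In a typical run the data-driven cut point yields apparent sensitivity and specificity several percentage points higher than the prespecified cut point on the same sample, which is the optimistic bias the study quantifies.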

Related Topics
Health Sciences; Medicine and Dentistry; Public Health and Health Policy