Article ID: 2464718
Journal: The Veterinary Journal
Published Year: 2011
Pages: 5 Pages
File Type: PDF
Abstract

Veterinary clinical and epidemiological investigations demand observer reliability. Kappa (κ) statistics are often used to adjust the observed percentage agreement according to that expected by chance. In highly homogeneous populations, κ ratings can be poor even when percentage agreements are high, because the probability of chance agreement is also high. Veterinary researchers are often unsure how to interpret these ambiguous results. It is suggested that prevalence indices (PIs), reflecting the homogeneity of the sample, should be reported alongside percentage agreements and κ values. Here, a published PI calculation is extended to permit extrapolation to situations involving three or more observers. A process is proposed for classifying results into those that do and do not attain clinically useful ratings, and those tested on excessively homogeneous populations, which are therefore inconclusive. Pre-selection of balanced populations, or adjustment of scoring thresholds, can help reduce population homogeneity. Reporting PIs in observer reliability studies in veterinary science and other disciplines enables reliability to be interpreted usefully and allows results to be compared between studies.
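
To illustrate why high percentage agreement can coexist with a poor κ in a homogeneous sample, the minimal sketch below computes percentage agreement, Cohen's κ, and the commonly used two-observer prevalence index from a 2x2 agreement table. The cell labels (a, b, c, d), the function name and the example counts are assumptions chosen for illustration; this is the standard two-rater calculation, not the extended multi-observer PI proposed in the paper.

    # Sketch of two-observer agreement statistics from a 2x2 table
    # (assumed cell layout: a = both positive, d = both negative,
    #  b and c = the two kinds of disagreement).
    def agreement_stats(a, b, c, d):
        n = a + b + c + d
        p_observed = (a + d) / n                      # percentage agreement
        p_pos = ((a + b) / n) * ((a + c) / n)         # chance agreement on "positive"
        p_neg = ((c + d) / n) * ((b + d) / n)         # chance agreement on "negative"
        p_chance = p_pos + p_neg
        kappa = (p_observed - p_chance) / (1 - p_chance)
        pi = abs(a - d) / n                           # prevalence index (two observers)
        return p_observed, kappa, pi

    # Highly homogeneous example: agreement is 0.92, yet kappa is only ~0.29
    # because chance agreement is ~0.89; the high PI (~0.88) flags this.
    print(agreement_stats(a=90, b=4, c=4, d=2))

Reporting the PI alongside κ in such a case signals that the low κ reflects sample homogeneity rather than unreliable observers.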

Related Topics: Life Sciences; Agricultural and Biological Sciences; Animal Science and Zoology