Article ID: 517507
Journal: International Journal of Medical Informatics
Published Year: 2005
Pages: 9
File Type: PDF
Abstract

Purpose: Many criteria have been developed to rate the quality of online health information. To effectively evaluate quality, consumers must use quality criteria that can be reliably assessed. However, few instruments have been validated for inter-rater agreement. Therefore, we assessed the degree to which two raters could reliably assess 22 popularly cited quality criteria on a sample of 42 complementary and alternative medicine Web sites.
Methods: We determined the degree of inter-rater agreement by calculating the percentage agreement, Cohen's kappa, and the prevalence- and bias-adjusted kappa (PABAK).
Results: Our uncalibrated analysis showed poor inter-rater agreement on eight of the 22 quality criteria. We therefore created operational definitions for each criterion, decreased the number of assessment choices, and defined where to look for the information. As a result, 18 of the 22 quality criteria were reliably assessed (inter-rater agreement ≥ 0.6).
Conclusions: We conclude that even with precise definitions, some commonly used quality criteria cannot be reliably assessed. However, inter-rater agreement can be improved with precise operational definitions.
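For readers unfamiliar with the agreement statistics named in the Methods, the sketch below shows how percentage agreement, Cohen's kappa, and PABAK can be computed for two raters scoring a single binary quality criterion. The rating arrays and the helper function are illustrative assumptions, not data or code from the study.

    # Illustrative sketch: two-rater agreement statistics for one binary criterion.
    # The ratings below are made-up examples, not data from the study.

    def agreement_stats(rater_a, rater_b):
        """Return percentage agreement, Cohen's kappa, and PABAK for two binary raters."""
        n = len(rater_a)
        # Observed agreement: proportion of sites where both raters gave the same score.
        p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n

        # Expected chance agreement for Cohen's kappa, from each rater's marginal rates.
        p_a1 = sum(rater_a) / n          # rater A's rate of scoring "present"
        p_b1 = sum(rater_b) / n          # rater B's rate of scoring "present"
        p_e = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)
        kappa = (p_o - p_e) / (1 - p_e) if p_e != 1 else 1.0

        # PABAK fixes chance agreement at 0.5, removing prevalence and bias effects.
        pabak = 2 * p_o - 1
        return p_o, kappa, pabak

    # Example: 10 hypothetical Web sites scored 1 (criterion present) or 0 (absent).
    a = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]
    b = [1, 1, 0, 1, 0, 1, 0, 1, 1, 1]
    p_o, kappa, pabak = agreement_stats(a, b)
    print(f"agreement={p_o:.2f}  kappa={kappa:.2f}  PABAK={pabak:.2f}")

In this example the raters agree on 9 of 10 sites (agreement 0.90, kappa 0.74, PABAK 0.80), which would exceed the study's 0.6 reliability threshold.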

Related Topics
Physical Sciences and Engineering > Computer Science > Computer Science Applications
Authors