Article ID: 517166
Journal: International Journal of Medical Informatics
Published Year: 2008
Pages: 9 Pages
File Type: PDF
Abstract

Objective
To obtain an overview of the study designs and study methods used in research evaluating IT in health care, to present a list of quality criteria by which all kinds of reported evaluation studies on IT systems in health care can be assessed, and to assess the quality of reported evaluation studies on IT in health care and its development over time (1982–2005).

Methods
A generic 10-item list of quality indicators was developed based on the existing literature on the quality of medical and medical informatics publications. It is applicable to all kinds of IT evaluation papers and is not restricted to randomized controlled trials. One hundred and twenty explanatory papers evaluating the effects of an IT system in health care, published between 1982 and 2005, were randomly selected from PubMed; the study designs and study methods were extracted, and two independent raters used the quality indicators to assess the quality of each paper.

Results
The inter-rater agreement on scoring the 10 quality indicators, as assessed in a pre-test with nine papers, was good (κ = 0.87). There was a trend towards more multi-centre studies and towards authors coming more frequently from several different departments. About 70% of the studies used a design other than a randomized controlled trial (RCT). Forty percent of the studies combined at least two different data acquisition methods. The quality of IT evaluation papers, as defined by the quality indicators, improved only slightly over time (Spearman correlation coefficient rs = 0.19). The quality of RCT publications was significantly higher than that of non-RCT studies (p < 0.001).

Conclusion
The continuing predominance of non-RCT studies reflects the variety of approaches applicable to evaluating IT systems in health care. Despite the growing discussion on evidence-based health informatics, the quality of published evaluation studies on IT interventions in health care is still insufficient in some respects. Journal editors and referees should ensure that reports of evaluations of IT systems contain all the aspects needed for sufficient understanding and reproducibility of a paper. Publication guidelines should be developed to support more complete and better publication of IT evaluation papers.
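To make the two headline statistics concrete, here is a minimal Python sketch of how an inter-rater agreement (Cohen's kappa) and a quality-over-time trend (Spearman's rs) of the kind reported above could be computed. It assumes scipy and scikit-learn are available; all rating data below is invented for illustration and does not come from the study.

```python
# Illustrative sketch only: the per-paper scores are hypothetical,
# not the study's actual data.
from scipy.stats import spearmanr
from sklearn.metrics import cohen_kappa_score

# Hypothetical quality scores (0-10) from two independent raters for
# nine pre-test papers, mirroring the pre-test described in Methods.
rater_a = [7, 5, 9, 4, 6, 8, 3, 7, 5]
rater_b = [7, 5, 8, 4, 6, 8, 3, 7, 6]
kappa = cohen_kappa_score(rater_a, rater_b)  # study reports kappa = 0.87

# Hypothetical (publication year, quality score) pairs; the study reports
# only a weak positive trend over 1982-2005 (rs = 0.19).
years = [1982, 1987, 1991, 1995, 1999, 2001, 2003, 2005]
scores = [4, 5, 4, 6, 5, 7, 6, 7]
rs, p_value = spearmanr(years, scores)

print(f"Cohen's kappa: {kappa:.2f}, Spearman rs: {rs:.2f} (p = {p_value:.3f})")
```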

Related Topics
Physical Sciences and Engineering > Computer Science > Computer Science Applications