Article ID: 1074856
Journal: Health Outcomes Research in Medicine
Published Year: 2011
Pages: 9
File Type: PDF
Abstract

Background
Many checklists and validity scales are available to assess the quality and content of systematic reviews. The New Zealand Guidelines Group, a not-for-profit organization, uses the Graphical Appraisal Tool for Epidemiology (GATE), a critical appraisal tool developed by the Effective Practice, Informatics and Quality Improvement collaboration in New Zealand, in guideline development. The objectives of this study were to test the interobserver reliability of individual items on the GATE systematic review checklist and to document reviewers' experiences of using GATE in order to modify the checklist.

Methods
Two reviewers independently completed a GATE systematic review checklist for each study in a sample of 10 systematic reviews included in clinical practice guidelines. Agreement between reviewers was calculated for each item on the GATE checklist using percentage agreement, kappa, and prevalence-adjusted bias-adjusted kappa (PABAK), and reviewers' experiences of using the tool were documented. The GATE tool was then modified based on reviewers' agreement.

Results
Crude agreement between reviewers on individual GATE items ranged from 55% to 100%, with a median of 73%. Interrater reliability varied across items, from a PABAK score of 0.09 (poor) to 1 (perfect), with a median of 0.455 (moderate). Agreement and reliability were both highest for interpretation of subgroup analyses and for summary scores of internal validity; the lowest scores related to items assessing reproducibility, publication bias, precision of results, and applicability. Agreement on the overall summary score was rated "good," with 82% agreement and a PABAK score of 0.636. Following the appraisals, 7 question items on the GATE framework were amended and one question was deleted; 12 changes were made to the accompanying notes.

Conclusions
The amended GATE checklist provides clearer, easier-to-follow notes for appraising systematic reviews. This study demonstrates how the usability of critical appraisal checklists can be improved through a formal evaluation process that could be undertaken alongside critiquing evidence.
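The agreement statistics reported above can be sketched in a few lines of Python. This is an illustrative implementation (the function name and example ratings are invented, not taken from the study): observed agreement is the proportion of items the two reviewers score identically, Cohen's kappa corrects it using chance agreement derived from each rater's marginal proportions, and PABAK instead assumes a 50/50 expected distribution, which for two categories reduces to 2·p_o − 1 (consistent with the reported 82% agreement giving a PABAK near 0.64).

```python
from collections import Counter

def agreement_stats(rater1, rater2):
    """Percent agreement, Cohen's kappa, and PABAK for two raters'
    two-category (e.g. yes/no) ratings of the same items."""
    n = len(rater1)
    # Observed agreement: proportion of items rated identically
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Chance agreement from each rater's marginal proportions (Cohen's kappa)
    c1, c2 = Counter(rater1), Counter(rater2)
    categories = set(rater1) | set(rater2)
    p_e = sum((c1[c] / n) * (c2[c] / n) for c in categories)
    kappa = (p_o - p_e) / (1 - p_e)
    # PABAK fixes expected agreement at 0.5 for two categories: 2 * p_o - 1
    pabak = 2 * p_o - 1
    return p_o, kappa, pabak

# Hypothetical ratings for 10 checklist items (not data from the study)
r1 = ["yes", "yes", "yes", "no", "no", "yes", "yes", "no", "yes", "yes"]
r2 = ["yes", "yes", "no", "no", "yes", "yes", "yes", "no", "yes", "yes"]
p_o, kappa, pabak = agreement_stats(r1, r2)  # p_o = 0.8, pabak = 0.6
```

Because PABAK ignores the raters' actual marginals, it can differ markedly from kappa when one rating category is much more prevalent than the other, which is why the study reports both.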

Related Topics
Health Sciences; Medicine and Dentistry; Health Informatics
Authors