Article Code | Journal Code | Publication Year | English Article | Full-Text Version |
---|---|---|---|---|
4279534 | 1611535 | 2012 | 6-page PDF | Free Download |
![First-page preview: Assessment of medical student clinical reasoning by “lay” vs physician raters: inter-rater reliability using a scoring guide in a multidisciplinary objective structured clinical examination](/preview/png/4279534.png)
**Background:** To determine whether a “lay” rater could assess clinical reasoning, inter-rater reliability was measured between physician and lay raters of patient notes written by medical students as part of an 8-station objective structured clinical examination.

**Methods:** Seventy-five notes were rated independently by physician and lay raters on core elements of clinical reasoning, using a scoring guide developed by physician consensus. Twenty-five notes were re-rated by a second physician rater as an expert control. Kappa statistics and simple percentage agreement were calculated in 3 areas: evidence for each diagnosis, evidence against each diagnosis, and diagnostic workup.

**Results:** Agreement between physician and lay raters for the top diagnosis was as follows: supporting evidence, 89% (κ = .72); evidence against, 89% (κ = .81); and diagnostic workup, 79% (κ = .58). Agreement between the two physician raters was 83% (κ = .59), 92% (κ = .87), and 96% (κ = .87), respectively.

**Conclusions:** Using a comprehensive scoring guide, inter-rater reliability between physician and lay raters was comparable to the reliability between two expert physician raters.
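For context (this note is not part of the published abstract): Cohen's kappa adjusts raw percentage agreement for the agreement expected by chance alone. A standard formulation is

$$\kappa = \frac{p_o - p_e}{1 - p_e},$$

where $p_o$ is the observed proportion of agreement and $p_e$ is the proportion of agreement expected by chance. As a purely illustrative calculation (the paper's chance-agreement figures are not reproduced here), an observed agreement of 89% with a chance-expected agreement of roughly 60% gives $\kappa \approx (0.89 - 0.60)/(1 - 0.60) \approx 0.72$, which is why a high percentage agreement can correspond to a more modest kappa.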
Journal: The American Journal of Surgery - Volume 203, Issue 1, January 2012, Pages 81–86