Article code | Journal code | Publication year | English article | Full-text version |
---|---|---|---|---|
3908773 | 1251193 | 2012 | 5-page PDF | Free download |

We analysed intra- and inter-rater agreement of subjective third-party assessment of aesthetic outcome, and its agreement with a semi-automated objective software evaluation tool (BCCT.core). Standardized photographs of 50 patients, taken shortly after and one year after surgery, were presented to a panel of five breast surgeons, six breast nurses, seven members of a breast cancer support group, five medical students and seven non-medical students. In two rounds they rated aesthetic outcome on a four-point scale; the same photographs were also evaluated by the BCCT.core software. Intra-rater agreement among panel members was moderate to substantial (k = 0.4–0.5; wk = 0.6–0.7, depending on subgroup and time of assessment). In contrast, inter-rater agreement was only slight to fair (mk = 0.1–0.3). Agreement between the panel participants and the software was fair (wk = 0.24–0.45). Subjective third-party assessment therefore agrees only fairly with objective BCCT.core evaluation, just as third-party raters do not agree well with one another.
Journal: The Breast - Volume 21, Issue 1, February 2012, Pages 61–65
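For readers unfamiliar with the agreement statistics reported in the abstract (k, wk), the sketch below shows how an unweighted and a weighted Cohen's kappa could be computed for one rater's two assessment rounds. It is a minimal illustration assuming scikit-learn and a linear weighting scheme; the ratings are invented placeholders, not the study's data.

```python
# Minimal sketch: Cohen's kappa and weighted kappa for intra-rater agreement
# on a four-point aesthetic-outcome scale (1 = excellent ... 4 = poor).
# The ratings below are illustrative only; the weighting scheme (linear) is
# an assumption, not necessarily the one used in the study.
from sklearn.metrics import cohen_kappa_score

round_1 = [1, 2, 2, 3, 4, 1, 2, 3, 3, 4]  # rater's first assessment round
round_2 = [1, 2, 3, 3, 4, 2, 2, 3, 4, 4]  # rater's second assessment round

# Unweighted kappa (k): all disagreements count equally.
k = cohen_kappa_score(round_1, round_2)

# Weighted kappa (wk): larger disagreements are penalised more,
# which suits an ordinal four-point scale.
wk = cohen_kappa_score(round_1, round_2, weights="linear")

print(f"kappa = {k:.2f}, weighted kappa = {wk:.2f}")
```

The same call applied to ratings from two different panel members would give an inter-rater rather than an intra-rater agreement estimate.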