Article Code | Journal Code | Publication Year | English Article | Full-Text Version |
---|---|---|---|---|
4278204 | 1611482 | 2016 | 6-page PDF | Free download |
• Factor analysis was performed on resident assessment and faculty evaluation tools.
• Despite the complexity of these tools, they provided very little discrimination among the characteristics they were designed to measure.
• Faculty were evaluated on ratings in only 2 areas: clinical care and interpersonal skills.
• Resident assessment consisted of only one underlying component—overall resident competency.
Background: Increasing focus on more granular assessment in medical education has led to lengthier instruments, raising concern that the added complexity undermines the utility of these tools. This study evaluated the relative contribution of individual questions in an assessment of resident performance and in a faculty performance evaluation completed by residents.

Methods: The authors performed factor analysis on the individual items in the resident assessment instrument (3,009 assessments of 71 residents) and in the faculty evaluations (7,328 evaluations of 61 faculty) collected from 2006 to 2012.

Results: Factor analysis of the resident assessment tool revealed that 1 component was responsible for 96.6% of the variance. This component encompassed every question on the assessment form and could be termed "overall resident competency." Factor analysis of the attending evaluation form revealed 2 unique components, representing "clinical care" and "interpersonal skills," which together accounted for 89.9% of the variance.

Conclusions: Three components accounted for 90% to 97% of the observed variance in our analysis. Factor analysis represents a useful strategy for assessing the utility of data obtained from individual items in assessment and evaluation instruments.
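The variance decomposition the authors describe can be illustrated with a short sketch. The code below is a hypothetical example, not the study's analysis: it uses principal component analysis (a close relative of the factor analysis named in the paper) on synthetic rating data whose item count, loading structure, and noise level are assumptions. It reproduces the qualitative finding that a single dominant component can explain nearly all of the variance when every item tracks one underlying trait.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Hypothetical data: 3,009 assessments with 10 Likert-style items each.
# A single latent "overall competency" score drives every item, mirroring
# the one-component structure the study reports.
n_assessments, n_items = 3009, 10
latent = rng.normal(size=(n_assessments, 1))            # overall competency
noise = 0.2 * rng.normal(size=(n_assessments, n_items))
ratings = latent @ np.ones((1, n_items)) + noise        # simulated item scores

pca = PCA()
pca.fit(ratings)

# Proportion of variance explained by each component. A dominant first
# component indicates the individual items add little independent
# discrimination beyond an overall score.
print(pca.explained_variance_ratio_.round(3))
```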
Journal: The American Journal of Surgery - Volume 211, Issue 6, June 2016, Pages 1158–1163