Article code | Journal code | Publication year | English article | Full-text version |
---|---|---|---|---|
372731 | 622143 | 2012 | 6-page PDF | Free download |
The vast majority of research on student evaluation of instruction has assessed the reliability of groups of courses, yielding either a single reliability coefficient for the entire group or grouped reliability coefficients for each student evaluation of teaching (SET) item. This manuscript argues that these practices constitute a form of ecological correlation and therefore yield incorrect estimates of reliability. Intraclass reliability and agreement coefficients were proposed as appropriate for making statements about the reliability of SETs in specific classes. An analysis of 1073 course sections using inter-rater coefficients found that students using this particular instrument were generally unable to reliably evaluate faculty. In contrast, the traditional, ecologically flawed multi-class "group" coefficients suggested generally acceptable reliability.
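The abstract does not include the authors' formulas, but the class-level intraclass approach it advocates can be illustrated with the standard one-way random-effects ICC(1) from ANOVA mean squares. The code below is a minimal sketch, not the paper's implementation: the function name, the item-by-student rating matrix, and the sample data are all hypothetical, assuming SET items as targets (rows) and students as raters (columns) within a single class section.

```python
# Hypothetical sketch: one-way random-effects intraclass correlation, ICC(1),
# computed for a single class section. Rows = SET items (targets),
# columns = student raters. Data below are invented for illustration.

def icc1(ratings):
    """ICC(1) = (MSB - MSW) / (MSB + (k - 1) * MSW), one-way ANOVA form."""
    n = len(ratings)        # number of targets (SET items)
    k = len(ratings[0])     # raters (students) per target
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    # Between-targets mean square
    msb = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    # Within-targets mean square
    msw = sum((x - m) ** 2
              for row, m in zip(ratings, row_means)
              for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Three hypothetical SET items rated by four students on a 1-5 scale
ratings = [
    [4, 5, 4, 4],
    [2, 3, 2, 3],
    [5, 5, 4, 5],
]
print(round(icc1(ratings), 3))  # → 0.827
```

Values near 1 indicate that students within the section agree; values near 0 support the paper's point that a pooled multi-class coefficient can look acceptable even when individual sections are unreliable.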
► Claims about the reliability of student evaluations of faculty are based on faulty methodology.
► Student evaluations in this study were generally unreliable.
► The reliability of student evaluations should be routinely assessed.
Journal: Studies in Educational Evaluation - Volume 38, Issue 1, March 2012, Pages 15–20