Article ID | Journal ID | Publication Year | English Article | Full-Text Version |
---|---|---|---|---|
360178 | 620439 | 2015 | 13-page PDF | Free download |
• Rater-to-student feedback was analysed to identify clearer rating scale descriptors.
• Feedback showed consensus on many rating decisions not covered in the original scale.
• New raters preferred the feedback content-informed scale over the existing scale.
• Feedback content analysis offers an efficient, unobtrusive scale development method.
This article introduces the use of tutor-to-student summative feedback comments to support rating scale development. The application of this method in a large-scale English for Academic Purposes course is presented in the hope that it may prove useful to other practitioners. When assessing written coursework, educational programmes commonly produce summative feedback comments that are reported to students alongside their grades, and many institutions hold records of such comments extending back several years. These comments capture the judgements of a range of raters made while rating a range of performance samples: exactly the kind of data used in rating scale development. In this study, a sample of 150 comments was analysed to create new descriptors for one rating scale category. The sampled comments were coded into evaluative statements, and the statements showing rater consensus within each band were used to write new scale descriptors. The resulting scale was compared against the existing scale in interviews, with raters expressing a general preference for the former. “Feedback content analysis” thus offers another tool for scale development based on performance data, capturing experienced raters' opinions on a large number of performances while offering practical advantages over the typical “rater workshop” method.
Journal: Journal of English for Academic Purposes - Volume 18, June 2015, Pages 51–63
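The core of the method described in the abstract, coding feedback comments into evaluative statements and retaining those on which raters within a score band agree, can be illustrated with a short sketch. The snippet below is a minimal, hypothetical example: the band labels, statement codes, and consensus threshold are invented for illustration and are not taken from the study's actual coding scheme or data.

```python
from collections import Counter, defaultdict

# Hypothetical coded data: each record pairs a score band with an
# evaluative-statement code extracted from a rater's summative comment.
# Labels and codes are illustrative only.
coded_statements = [
    ("Band 5", "clear thesis statement"),
    ("Band 5", "sources well integrated"),
    ("Band 5", "clear thesis statement"),
    ("Band 4", "thesis present but underdeveloped"),
    ("Band 4", "thesis present but underdeveloped"),
    ("Band 4", "some unsupported claims"),
    ("Band 3", "no identifiable thesis"),
]

CONSENSUS_THRESHOLD = 2  # minimum number of raters making the same evaluation in a band


def consensus_statements(records, threshold):
    """Group evaluative-statement codes by band and keep those that enough
    raters agree on, as candidate material for new scale descriptors."""
    by_band = defaultdict(Counter)
    for band, code in records:
        by_band[band][code] += 1
    return {
        band: [code for code, n in counts.items() if n >= threshold]
        for band, counts in by_band.items()
    }


if __name__ == "__main__":
    for band, codes in sorted(consensus_statements(coded_statements, CONSENSUS_THRESHOLD).items()):
        print(band, "->", codes or "(no consensus)")
```

In practice the qualitative coding step would be done by analysts rather than automatically; the sketch only shows how consensus within each band might be tallied once the comments have been coded.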