• We adapted the Jacobs et al. (1981) rubric for theoretical and empirical reasons.
• We scored 80 essays using both the original and the revised rubric.
• Rasch measurement and profile analysis assessed both rubrics’ functioning.
• Rater interviews helped us understand the original and revised rubrics’ functioning.
Because rubrics are the foundation of a rater's scoring process, principled rubric use requires systematic review as rubrics are adopted and adapted (Crusan, 2010, p. 72) into different local contexts. However, detailed accounts of rubric adaptations are somewhat rare. This article presents a mixed-methods (Brown, 2015) study assessing the functioning of a well-known rubric (Jacobs, Zinkgraf, Wormuth, Hartfiel, & Hughey, 1981, p. 30) using both Rasch measurement and profile analysis (n = 524), which were used, respectively, to analyze the scale structure and to describe how well the rubric classified examinees. After finding a lack of distinction within the rubric's scale structure, the authors adapted the rubric according to theoretical and empirical criteria. The resulting scale structure was then piloted by two program outsiders and analyzed again with Rasch measurement, with placement again evaluated by profile analysis (n = 80). While the revised rubric can continue to be fine-tuned, this study describes how one research team developed an ongoing rubric analysis, a practice these authors recommend be developed more regularly in other contexts that use high-stakes performance assessment.
Journal: Assessing Writing - Volume 26, October 2015, Pages 51–66