• Raters have more difficulty making reading abilities explicit than writing and language abilities with an EBB rating scale.
• Instructors gain curricular understanding while developing a placement EBB rating scale.
• Factors external to the source data influence the development of an EBB rating scale.
Integrated reading-to-write (RTW) tasks have increasingly taken the place of independent writing-only tasks in assessing academic literacy; however, previous research has rarely investigated the development and use of rating scales to interpret and score test takers’ performance on such tasks. This study investigated how four highly experienced ESL instructors developed an empirically derived, binary-choice, boundary-definition (EBB) rating scale. EBB scales are known to be reliable and effective for assessing specific writing tasks administered to a single population. Nonetheless, evidence suggests that factors outside the curriculum also influence the criteria that shape an EBB scale and thus final placement scores. Analysis of the recorded deliberations provides evidence of instructors’ conceptualizations of reading, writing, and language in the RTW task, although these constructs are not equally transparent in the EBB rating scale that was developed. Understanding the task and the curriculum, as well as considering the future training of raters, posed additional challenges in designing this EBB scale. Despite such challenges, an EBB rating scale has the potential to help us better understand the relative contribution of hybrid constructs to the overall quality of RTW task performance and to strengthen the linkages among teaching, rating, and future rater training.
Journal: Assessing Writing - Volume 26, October 2015, Pages 38–50