Article ID: 344198
Journal: Assessing Writing
Published Year: 2016
Pages: 10
File Type: PDF
Abstract

• Previous research focused on rater errors or content theories of writing.
• We examine the impact of response content on the ease of scoring essays accurately.
• Essay length and lexical diversity influence ease of scoring accurately.
• Future research should consider additional features of the rating context.
• Communication of these results could improve rater training and monitoring.

Previous research exploring potential antecedents of rater effects in essay scoring has focused on a range of contextual variables, such as rater background, rating context, and prompt demand. This study instead predicts how difficult an essay is to score accurately from the essay's content itself, using linear regression modeling to measure the association between essay features (e.g., length, lexical diversity, sentence complexity) and raters' ability to assign scores that match those assigned by expert raters. We found that two essay features, essay length and lexical diversity, account for 25% of the variance in ease-of-scoring measures, and these variables are selected by the predictive models whether or not the essay's true score is included in the equation. We suggest potential applications of these results to rater training and monitoring in direct writing assessment scoring projects.
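To make the analytic approach concrete, the following is a minimal sketch of the kind of regression the abstract describes, assuming synthetic data and a hypothetical ease-of-scoring outcome (for instance, the proportion of operational raters whose score matched the expert score for each essay); the feature values, coefficients, and variable names are illustrative and are not the authors' code or data.

```python
# Illustrative sketch only: regressing a hypothetical ease-of-scoring measure
# on essay features, with synthetic data standing in for the study's corpus.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_essays = 200

# Hypothetical essay features
length = rng.normal(450, 120, n_essays)               # word count
lexical_diversity = rng.uniform(0.3, 0.8, n_essays)   # e.g., a type-token ratio
sentence_complexity = rng.normal(15, 4, n_essays)     # mean words per sentence

# Hypothetical ease-of-scoring outcome: proportion of raters agreeing with the
# expert score, generated here so that length and lexical diversity matter.
ease_of_scoring = (0.2
                   + 0.0004 * length
                   + 0.3 * lexical_diversity
                   + rng.normal(0, 0.1, n_essays))

X = np.column_stack([length, lexical_diversity, sentence_complexity])
model = LinearRegression().fit(X, ease_of_scoring)

print("Coefficients (length, lexical diversity, sentence complexity):", model.coef_)
print(f"R^2 = {model.score(X, ease_of_scoring):.2f}")
```

In the study itself, the reported result is that essay length and lexical diversity together account for about 25% of the variance in the ease-of-scoring measures, which in a model like the one above would correspond to an R^2 near 0.25.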

Related Topics
Social Sciences and Humanities; Arts and Humanities; Language and Linguistics