The recent article in this journal, “State-of-the-art automated essay scoring: Competition results and future directions from a United States demonstration” by Shermis, ends with the claims: “Automated essay scoring appears to have developed to the point where it can consistently replicate the resolved scores of human raters in high-stakes assessment. While the average performance of vendors does not always match the performance of human raters, the results of the top two to three vendors was consistently good and occasionally exceeded human rating performance.” These claims are not supported by the data in the study; on the contrary, the study's raw data provide clear and irrefutable evidence that automated essay scoring engines grossly and consistently over-privilege essay length when computing student writing scores. The state of the art referred to in the title of the article is, largely, simply counting words.
Journal: Assessing Writing - Volume 21, July 2014, Pages 104–111