Article code | Journal code | Publication year | English article | Persian translation | Full-text version |
---|---|---|---|---|---|
344221 | 617355 | 2014 | 24-page PDF | Available to order | Free download |
• National demonstration and public competition on automated essay scoring.
• Divided over 17,000 high-stakes essays into two sets: training and test.
• Machines had comparable performance on five of seven criterion measures.
• With additional validity studies, machine scoring may play a role in high-stakes essay assessment.
• Writing dimensions need to be better articulated for comparisons to be more meaningful.
This article summarizes the highlights of two studies: a national demonstration that contrasted commercial vendors’ performance on automated essay scoring (AES) with that of human raters, and an international competition to match or exceed commercial vendor performance benchmarks. In these studies, the automated essay scoring engines performed well on five of seven measures and approximated human rater performance on the other two. Pending additional validity studies, automated essay scoring appears to hold the potential to play a viable role in high-stakes writing assessments.
Journal: Assessing Writing - Volume 20, April 2014, Pages 53–76