Article code: 6831568
Journal code: 1434317
Publication year: 2017
English article: 10-page PDF
Full-text version: free download
English title of the ISI article
Design and evaluation of automated writing evaluation models: Relationships with writing in naturalistic settings
Persian translation of the title
طراحی و ارزیابی مدل های ارزیابی نوشتن خودکار: روابط با نوشتن در تنظیمات طبیعی
Keywords
Automated essay scoring; essay test validity
Related subjects
Humanities and Social Sciences › Humanities and Arts › Language and Linguistics
English abstract
Automated Writing Evaluation (AWE) systems are built by extracting features from a 30 min essay and using a statistical model that weights those features to optimally predict human scores on the 30 min essays. But the goal of AWE should be to predict performance in real-world naturalistic tasks, not just to predict human scores on 30 min essays. Therefore, a more meaningful way of creating the feature weights in the AWE model is to select weights that are optimized to predict the real-world criterion. This unique new approach was used in a sample of 194 graduate students who supplied two examples of their writing from required graduate school coursework. Contrary to results from a prior study predicting portfolio scores, the experimental model was no more effective than the traditional model in predicting scores on actual writing done in graduate school. Importantly, when the new weights were evaluated in large samples of international students, the population subgroups that were advantaged or disadvantaged by the new weights were different from the groups advantaged/disadvantaged by the traditional weights. It is critically important for any developer of AWE models to recognize that models that are equally effective in predicting an external criterion may advantage/disadvantage different groups.
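The abstract contrasts two ways of setting an AWE model's feature weights: fitting them to reproduce human scores on timed essays versus fitting them to predict a real-world (naturalistic) writing criterion. The sketch below is purely illustrative and is not the authors' model or data; it assumes made-up essay features and uses scikit-learn linear regression to show how the same feature set can yield different weight vectors depending on which criterion the weights are optimized against.

```python
# Hypothetical sketch: traditional vs. criterion-optimized AWE weights.
# Features, data, and coefficients are invented for illustration only.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_essays = 194  # sample size mentioned in the abstract

# Features extracted from each timed essay (e.g., length, vocabulary, syntax).
X = rng.normal(size=(n_essays, 3))

# Criterion 1: human scores on the same timed essays (simulated).
human_scores = X @ np.array([0.6, 0.3, 0.1]) + rng.normal(scale=0.5, size=n_essays)

# Criterion 2: scores on actual coursework writing, the naturalistic criterion (simulated).
coursework_scores = X @ np.array([0.2, 0.5, 0.3]) + rng.normal(scale=0.8, size=n_essays)

# Traditional AWE model: weights chosen to best reproduce human essay scores.
traditional = LinearRegression().fit(X, human_scores)

# Experimental model: weights chosen to best predict the real-world criterion.
experimental = LinearRegression().fit(X, coursework_scores)

print("Traditional weights: ", traditional.coef_)
print("Experimental weights:", experimental.coef_)

# Evaluate both weight sets against the naturalistic criterion; the abstract
# reports that, in their sample, the two approaches performed about equally well,
# yet favored different population subgroups.
print("Traditional R^2 on coursework: ", traditional.score(X, coursework_scores))
print("Experimental R^2 on coursework:", experimental.score(X, coursework_scores))
```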
Publisher
Database: Elsevier - ScienceDirect
Journal: Assessing Writing - Volume 34, October 2017, Pages 62-71
Authors