Article ID: 6831568
Journal: Assessing Writing
Published Year: 2017
Pages: 10
File Type: PDF
Abstract
Automated Writing Evaluation (AWE) systems are built by extracting features from a 30-minute essay and using a statistical model that weights those features to optimally predict human scores on those essays. But the goal of AWE should be to predict performance on real-world, naturalistic writing tasks, not merely to predict human scores on 30-minute essays. A more meaningful way of creating the feature weights in an AWE model, therefore, is to select weights optimized to predict the real-world criterion. This new approach was applied in a sample of 194 graduate students who each supplied two examples of their writing from required graduate-school coursework. Contrary to results from a prior study predicting portfolio scores, the experimental model was no more effective than the traditional model at predicting scores on actual writing done in graduate school. Importantly, when the new weights were evaluated in large samples of international students, the population subgroups advantaged or disadvantaged by the new weights differed from the groups advantaged or disadvantaged by the traditional weights. It is critically important for developers of AWE models to recognize that models equally effective in predicting an external criterion may advantage or disadvantage different groups.
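The contrast between the two weighting strategies can be illustrated in code. The following is a minimal sketch, not the authors' actual model: the data are synthetic, and the variable names (human_score, coursework_score) and sample sizes are illustrative assumptions. It fits one set of feature weights to human scores on timed essays (the traditional approach) and another directly to a real-world criterion (the experimental approach), then evaluates both against the held-out criterion.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in data; 194 mirrors the study's sample size, 8 features
# is an arbitrary assumption.
rng = np.random.default_rng(0)
n, p = 194, 8
X = rng.normal(size=(n, p))                         # essay features
true_w = rng.normal(size=p)
human_score = X @ true_w + rng.normal(scale=0.5, size=n)
# Real-world criterion: related to, but not identical with, the timed-essay scores.
coursework_score = X @ (true_w + rng.normal(scale=0.3, size=p)) + rng.normal(scale=0.5, size=n)

X_tr, X_te, h_tr, h_te, c_tr, c_te = train_test_split(
    X, human_score, coursework_score, random_state=0)

# Traditional AWE weights: optimized to predict human scores on timed essays.
traditional = LinearRegression().fit(X_tr, h_tr)
# Experimental weights: optimized directly against the real-world criterion.
experimental = LinearRegression().fit(X_tr, c_tr)

# Both models are judged on the same held-out real-world criterion.
for name, m in [("traditional", traditional), ("experimental", experimental)]:
    r = np.corrcoef(m.predict(X_te), c_te)[0, 1]
    print(f"{name} weights: r with coursework criterion = {r:.3f}")

As the abstract notes, equal predictive accuracy under a comparison like this does not imply equal fairness: two weight vectors with the same criterion correlation can still advantage or disadvantage different population subgroups.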
Related Topics
Social Sciences and Humanities; Arts and Humanities; Language and Linguistics