Article ID: 6834569
Journal: Computers & Education
Published Year: 2018
Pages: 24
File Type: PDF
Abstract
Grading open-response answers in a massive course is an important task that cannot be handled without an intelligent system that extends the abilities of experts. Peer assessment is one approach: the students who wrote the answers also grade a small set of answers written by other students, and the grades thus obtained are aggregated into a reasonable overall grade for each answer. However, such systems have two clear disadvantages for students: they add to an already heavy workload, and the grades students ultimately receive come without feedback explaining the scores. This paper proposes a way to overcome both shortcomings. Students acting as graders are asked to evaluate several aspects of each answer: one is the overall grade, while the others are annotations that help explain it. We then cast grading as a learning task in which the students' responses (text documents) are the inputs and the assessed aspects (ordinal labels) are the outputs. Our proposal is to learn all of these labels at once with a multitask approach based on matrix factorization. The resulting method shows that peer assessment can provide feedback, and it can additionally grade the responses of students not involved in the peer assessment loop, significantly reducing the burden on students. We present the details of the method along with experiments on three data sets from courses in different fields at our university.
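To illustrate the core idea, the sketch below shows multitask grade prediction via low-rank matrix factorization. This is not the authors' exact method (the paper also uses the text of the answers as input features, which is omitted here); all names, dimensions, and hyperparameters are hypothetical. Rows of the grade matrix are student answers, columns are assessed aspects (the overall grade plus explanatory annotations), and many entries are missing because each answer is only peer-graded by a few students.

import numpy as np

rng = np.random.default_rng(0)

n_answers, n_aspects, rank = 50, 5, 3

# Synthetic ordinal grades in {0..4}; NaN marks unobserved entries.
# (Illustrative data only; the paper uses real course data sets.)
true_U = rng.normal(size=(n_answers, rank))
true_V = rng.normal(size=(n_aspects, rank))
Y = np.clip(np.rint(true_U @ true_V.T + 2), 0, 4).astype(float)
mask = rng.random(Y.shape) < 0.6            # 60% of grades observed
Y_obs = np.where(mask, Y, np.nan)

def factorize(Y_obs, rank, lr=0.01, reg=0.1, epochs=500):
    """Factor Y ~= U @ V.T by gradient descent on observed entries only."""
    n, m = Y_obs.shape
    U = rng.normal(scale=0.1, size=(n, rank))
    V = rng.normal(scale=0.1, size=(m, rank))
    observed = ~np.isnan(Y_obs)
    for _ in range(epochs):
        # Residual on observed cells; zero elsewhere so gradients ignore gaps.
        E = np.where(observed, U @ V.T - Y_obs, 0.0)
        U -= lr * (E @ V + reg * U)
        V -= lr * (E.T @ U + reg * V)
    return U, V

U, V = factorize(Y_obs, rank)
pred = np.clip(np.rint(U @ V.T), 0, 4)      # round back to the ordinal scale
print("held-out accuracy:", (pred[~mask] == Y[~mask]).mean())

Because all aspects share the same low-rank factors, information from the explanatory annotations transfers to the overall grade and vice versa, which is the multitask benefit the abstract describes; the completed matrix also supplies grades for answers that were never peer-assessed.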
Related Topics
Social Sciences and Humanities › Social Sciences › Education