Article ID | Journal | Published Year | Pages | File Type
---|---|---|---|---
6902018 | Procedia Computer Science | 2017 | 4 Pages |
Abstract
The most effective technique for improving students' writing skills is for them to receive immediate instructor feedback, as often as possible. This, however, significantly increases the instructors' workload, so there is a growing need for automated systems that help students draft essays. Automated essay evaluation is increasingly popular in the field of educational evaluation technology. In this work, we present an automatic evaluator of student essays in the Arabic language, a system modeled on the grading scheme followed by school teachers in Riyadh, the capital of Saudi Arabia. The main criteria for assessing the essays are: language proficiency; the structure of the essay; and the content, which should match the topic. With this in mind, we developed a scheme that relies on latent semantic analysis and rhetorical structure theory. The system was tested on over 300 different essays, all handwritten by schoolchildren and covering various subjects. Performance was measured by the machine-human correlation in grading; our system achieved an overall correlation of 0.79 with the teachers' evaluation.
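To illustrate the latent semantic analysis (LSA) component mentioned in the abstract, the sketch below projects documents into a low-rank semantic space via SVD and compares them by cosine similarity, which is the standard LSA recipe for judging whether an essay's content matches a topic. The toy corpus, the rank `k`, and the helper functions are illustrative assumptions, not the authors' actual pipeline (which works on Arabic text).

```python
# Minimal LSA sketch: term-document matrix -> truncated SVD -> cosine similarity.
# Corpus, rank k, and function names are illustrative assumptions only.
import numpy as np

def term_doc_matrix(docs):
    """Build a raw term-frequency matrix (terms x documents)."""
    vocab = sorted({w for d in docs for w in d.split()})
    index = {w: i for i, w in enumerate(vocab)}
    A = np.zeros((len(vocab), len(docs)))
    for j, d in enumerate(docs):
        for w in d.split():
            A[index[w], j] += 1
    return A

def lsa_similarity(docs, k=2):
    """Project documents into a rank-k latent space and return
    the matrix of pairwise cosine similarities between documents."""
    A = term_doc_matrix(docs)
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    D = (np.diag(s[:k]) @ Vt[:k]).T      # document vectors in latent space
    norms = np.linalg.norm(D, axis=1, keepdims=True)
    D = D / np.clip(norms, 1e-12, None)  # unit-normalize each document
    return D @ D.T

docs = ["the desert climate is hot and dry",   # on-topic reference
        "hot dry desert weather",              # on-topic essay
        "football is a popular sport"]         # off-topic essay
S = lsa_similarity(docs, k=2)
# The two on-topic documents end up closer in the latent space
# than the off-topic one: S[0, 1] > S[0, 2].
```

In a grader like the one described, the similarity between a student essay and reference material on the assigned topic would feed the "content matches the topic" criterion, alongside separate scores for language proficiency and essay structure.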
Related Topics
Physical Sciences and Engineering
Computer Science
Computer Science (General)
Authors
Maram F. Al-Jouie, Aqil M. Azmi