Article ID: 344235
Journal: Assessing Writing
Published Year: 2014
Pages: 17
File Type: PDF
Abstract

• We compared Automated Essay Scoring and instructor feedback in an ESL classroom.
• Feedback on grammar, usage, and mechanics was analyzed and students were surveyed.
• Perceived quality of feedback was also evaluated by an additional ESL instructor.
• Results showed the instructor provided more quality feedback than the AES system.
• Most students trusted AES feedback, yet rated instructor feedback as more valuable.

Writing is an essential component of students’ academic English development, yet it requires a considerable amount of time and effort on the part of both students and teachers. In an effort to reduce their workload, many instructors are looking into Automated Essay Scoring (AES) systems to complement more traditional ways of providing feedback. This paper investigates the use of an AES system in a college ESL writing classroom. Participants were 14 advanced students from various linguistic backgrounds who wrote on three prompts and received feedback from both the instructor and the AES system (Criterion). Instructor feedback on the drafts (n = 37) was compared to AES feedback and analyzed both quantitatively and qualitatively across the feedback categories of grammar (e.g., subject-verb agreement, ill-formed verbs), usage (e.g., incorrect articles, prepositions), and mechanics (e.g., spelling, capitalization); the perceived quality of the feedback was also rated by an additional ESL instructor. These data were triangulated with opinion surveys on students’ perceptions of the feedback they received. The results show large discrepancies between the two feedback types (the instructor provided more, and higher-quality, feedback) and carry important pedagogical implications, offering ESL writing instructors insights into the use of AES systems in their classrooms.

Related Topics
Social Sciences and Humanities › Arts and Humanities › Language and Linguistics