Article ID: 344457
Journal: Assessing Writing
Published Year: 2007
Pages: 18
File Type: PDF
Abstract

Training raters for writing assessment through web-based programmes is emerging as an attractive and flexible alternative to the conventional method of face-to-face training sessions. Although some online training programmes have been developed, little published research exists on them. The current study compares the effectiveness of online and face-to-face training in the context of a large-scale academic writing assessment for students entering a major English-medium university. A team of 16 raters, divided into two groups of 8, all initially rated a set of 70 scripts. In the training phase, the online group rated 15 benchmark scripts online and received immediate feedback, whereas the face-to-face group received individual feedback on their pre-training performance, rated the 15 scripts at home, and then met for a face-to-face session. After the training, both groups re-rated the initial 70 scripts and then reported their attitudes towards the different forms of training through questionnaires and interviews. The statistical results, based on multi-faceted Rasch measurement, showed that both types of training were effective overall, but the self-report data revealed varied responses favouring one type or the other. The findings are discussed in terms of the factors influencing rater responsiveness and the refinements needed for future rater training programmes.

Related Topics
Social Sciences and Humanities › Arts and Humanities › Language and Linguistics