Article Code | Journal Code | Publication Year | English Article | Full-Text Version |
---|---|---|---|---|
344254 | 617359 | 2015 | 15-page PDF | Free download |
• This paper presents a new approach towards marking large-scale complex assessments.
• The system offers a fundamentally different approach compared to automated scoring.
• The tool can facilitate the assessment of complex tasks and increase assessment credibility in MOOCs.
Complex tasks currently incur significant marking costs, which become exorbitant for courses with large numbers of students (e.g., in MOOCs). Large-scale assessments therefore depend on automated scoring systems. However, these systems tend to work best in assessments where correct responses can be explicitly defined; scoring becomes considerably more challenging for tasks that require deeper analysis and richer responses.

Structured peer-grading can be reliable, but the diversity inherent in very large classes can be a weakness for peer-grading systems, because it raises the objection that peer reviewers may not have qualifications matching the level of the task being assessed. Distributed marking can offer a way to handle both the volume and the complexity of these assessments.

We propose a solution in which peer scoring is assisted by a guidance system, improving peer review and increasing the efficiency of large-scale marking of complex tasks. The system involves an engine that automatically scaffolds the target paper against predefined rubrics, so that relevant content and indicators of higher-level thinking skills are framed and drawn to the marker's attention. Eventually, we aim to establish that the scores produced are comparable to those produced by expert raters.
Journal: Assessing Writing - Volume 24, April 2015, Pages 1–15
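The abstract leaves the scaffolding engine unspecified. As a rough sketch only, the following Python fragment illustrates one way such rubric-based framing could work: each rubric criterion carries indicator phrases, and sentences matching an indicator are flagged for the human marker. All class names, criteria, and indicator lists here are invented for illustration; a real engine would presumably use richer NLP than substring matching.

```python
import re
from dataclasses import dataclass

# Hypothetical sketch: the paper does not describe how its
# scaffolding engine is implemented.

@dataclass
class RubricCriterion:
    """One rubric dimension plus surface indicators that signal it."""
    name: str
    indicators: list[str]

@dataclass
class ScaffoldedSpan:
    """A sentence flagged for the marker, tagged with the matching criterion."""
    criterion: str
    sentence: str

def scaffold(text: str, rubric: list[RubricCriterion]) -> list[ScaffoldedSpan]:
    """Split the submission into sentences and flag those whose wording
    matches a rubric indicator, drawing the marker's attention to them."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    flagged = []
    for sentence in sentences:
        lowered = sentence.lower()
        for criterion in rubric:
            if any(phrase in lowered for phrase in criterion.indicators):
                flagged.append(ScaffoldedSpan(criterion.name, sentence))
    return flagged

if __name__ == "__main__":
    # Invented rubric: indicators of higher-level thinking skills.
    rubric = [
        RubricCriterion("analysis", ["because", "this suggests", "the evidence"]),
        RubricCriterion("evaluation", ["however", "a stronger", "in contrast"]),
    ]
    essay = ("The policy failed. This suggests weak incentives. "
             "However, a stronger design would price enforcement costs.")
    for span in scaffold(essay, rubric):
        print(f"[{span.criterion}] {span.sentence}")
```

Note that the sketch flags passages rather than scoring them, which matches the paper's framing: the system guides the peer marker's attention, and the scoring decision stays with the human.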