Article ID: 372786
Journal: Studies in Educational Evaluation
Published Year: 2007
Pages: 13
File Type: PDF
Abstract

This study examined the use of generalizability theory to evaluate the quality of an alternative assessment (journal writing) in mathematics. Twenty-nine junior college students wrote journal tasks on the given topics, and two raters marked the tasks using a scoring rubric, constituting a two-facet G-study design in which students were crossed with tasks and raters. The G coefficient was .76 and the index of dependability was .72. The results showed that increasing the number of tasks had a larger effect on the G coefficient and the index of dependability than increasing the number of raters. Implications for educational practice are discussed.
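For context, the two coefficients reported above have conventional definitions in generalizability theory for a fully crossed persons x tasks x raters (p x t x r) design. The formulas below are the standard ones rather than expressions reproduced from the study itself, with $n_t$ and $n_r$ denoting the numbers of tasks and raters assumed in a decision study and $\hat{\sigma}^2$ the estimated variance components:

$$
E\hat{\rho}^2 \;=\; \frac{\hat{\sigma}^2_p}{\hat{\sigma}^2_p + \dfrac{\hat{\sigma}^2_{pt}}{n_t} + \dfrac{\hat{\sigma}^2_{pr}}{n_r} + \dfrac{\hat{\sigma}^2_{ptr,e}}{n_t n_r}}
\qquad
\hat{\Phi} \;=\; \frac{\hat{\sigma}^2_p}{\hat{\sigma}^2_p + \dfrac{\hat{\sigma}^2_t}{n_t} + \dfrac{\hat{\sigma}^2_r}{n_r} + \dfrac{\hat{\sigma}^2_{pt}}{n_t} + \dfrac{\hat{\sigma}^2_{pr}}{n_r} + \dfrac{\hat{\sigma}^2_{tr}}{n_t n_r} + \dfrac{\hat{\sigma}^2_{ptr,e}}{n_t n_r}}
$$

Because both denominators divide the task-related components by $n_t$, increasing the number of tasks shrinks those error terms directly; when task-related variance dominates rater-related variance, adding tasks therefore raises both coefficients more than adding raters, which is consistent with the pattern the abstract reports.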
