Article code | Journal code | Publication year | English article | Full-text version |
---|---|---|---|---|
344368 | 617375 | 2012 | 20-page PDF | Free download |

The purpose of this study was to investigate the intra-rater and inter-rater reliability of the Critical Thinking Analytic Rubric (CTAR). The CTAR is composed of six rubric categories: interpretation, analysis, evaluation, inference, explanation, and disposition. To investigate inter-rater reliability, two trained raters scored four sets of performance-based student work samples derived from a pilot study and a subsequent larger study. The two raters also blindly scored a subset of student work samples a second time to investigate intra-rater reliability. Participants in this study were high school seniors enrolled in a college preparation course. Both raters showed acceptable levels of intra-rater reliability (α ≥ 0.70) in five of the six rubric categories. One rater showed poor consistency (α = 0.56) for the analysis category of the rubric, while the other rater showed excellent consistency (α = 0.91) for the same category, suggesting the need for further training of the former rater. The results of the inter-rater reliability investigation demonstrated acceptable levels of consistency (α ≥ 0.70) in all rubric categories. This investigation demonstrated that the CTAR can be used by raters to score student work samples in a consistent manner.
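The reliability coefficients reported above are denoted α, which is conventionally Cronbach's alpha; the abstract does not specify the exact estimator or data, so the following is a minimal illustrative sketch of how such a coefficient could be computed from two sets of scores (e.g., a rater's first and second scoring occasions for intra-rater reliability, or two raters' scores for inter-rater reliability). The function name and the toy score vectors are hypothetical, not taken from the study.

```python
def cronbach_alpha(score_sets):
    """Cronbach's alpha for a list of score sets.

    score_sets: list of k lists, each holding the scores that one
    rater (or one scoring occasion) assigned to the same n samples.
    Uses the standard formula: alpha = k/(k-1) * (1 - sum(item variances) / variance of totals).
    """
    k = len(score_sets)          # number of raters / occasions
    n = len(score_sets[0])       # number of student work samples

    def sample_var(xs):
        # unbiased (n-1) sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    sum_item_vars = sum(sample_var(s) for s in score_sets)
    totals = [sum(s[j] for s in score_sets) for j in range(n)]
    return k / (k - 1) * (1 - sum_item_vars / sample_var(totals))


# Hypothetical example: two scoring occasions over four work samples.
# Identical scores yield perfect consistency (alpha = 1.0).
print(cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4]]))  # → 1.0
```

With slightly divergent scores (e.g., `[[1, 2, 3, 4], [2, 2, 4, 3]]`) the coefficient drops below the 0.70 acceptability threshold used in the study only when disagreement is substantial, which is the pattern the analysis-category result (α = 0.56 for one rater) reflects.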
► We designed an analytic rubric to measure critical thinking skills and disposition.
► The intra-rater reliability indices were ≥0.70 in 5 of the 6 rubric categories.
► The inter-rater reliability indices were ≥0.70 for all rubric categories.
Journal: Assessing Writing - Volume 17, Issue 4, October 2012, Pages 251–270