Article ID | Journal ID | Publication Year | English Article | Full-Text Version
---|---|---|---|---
351662 | 618474 | 2012 | 15-page PDF | Free download

This paper reports a cross-validation study aimed at identifying reliable and valid assessment methods and technologies for natural language (i.e., written text) responses to complex problem-solving scenarios. To evaluate the two best-developed assessment technologies for text-based responses to problem-solving scenarios, ALA-Reader and T-MITOCAR, this study compared them to an alternative benchmark methodology. Comparisons among the three models (benchmark, ALA-Reader, and T-MITOCAR) yielded two findings: (a) the benchmark model created the most descriptive concept maps; and (b) the ALA-Reader model had a higher correlation with the benchmark model than T-MITOCAR did. The results imply that the benchmark model is a viable alternative to the two existing technologies and is worth exploring in a larger-scale study.
► We cross-validated three assessment methods for natural language responses to complex problems.
► We propose a benchmark method and compare it with two current technologies, ALA-Reader and T-MITOCAR.
► The benchmark model creates the most descriptive concept maps.
► The ALA-Reader model had a higher correlation with the benchmark model than T-MITOCAR did.
► The benchmark model is a viable alternative to the two existing technologies.
Journal: Computers in Human Behavior - Volume 28, Issue 2, March 2012, Pages 703–717