Article ID: 351662
Journal: Computers in Human Behavior
Published Year: 2012
Pages: 15
File Type: PDF
Abstract

This paper reports a cross-validation study aimed at identifying reliable and valid assessment methods and technologies for natural language (i.e., written text) responses to complex problem-solving scenarios. To investigate the current assessment technologies for text-based responses to problem-solving scenarios (i.e., ALA-Reader and T-MITOCAR), this study compared these two best-developed technologies to an alternative methodology. Comparisons among the three models (benchmark, ALA-Reader, and T-MITOCAR) yielded two findings: (a) the benchmark model created the most descriptive concept maps, and (b) the ALA-Reader model correlated more highly with the benchmark model than T-MITOCAR did. The results imply that the benchmark model is a viable alternative to the two existing technologies and is worth exploring in a larger-scale study.
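The comparison above hinges on measuring how closely the concept maps produced by each technology agree with those of the benchmark model. The abstract does not specify the scoring procedure, so the following is only an illustrative sketch, not the authors' method: it assumes a concept map can be modeled as a set of concept-pair links and quantifies agreement between two maps with a simple Jaccard overlap. All names and data are invented for illustration.

```python
# Hypothetical sketch of concept-map agreement (not the paper's procedure).
# A concept map is modeled as a list of concept-pair links; similarity is
# the Jaccard index over the two maps' link sets.

def links(pairs):
    """Normalize links so that (a, b) and (b, a) count as the same link."""
    return {frozenset(p) for p in pairs}

def jaccard_similarity(map_a, map_b):
    """Share of links common to both maps, out of all links in either map."""
    a, b = links(map_a), links(map_b)
    return len(a & b) / len(a | b) if (a | b) else 0.0

# Invented concept maps for one problem-solving response.
benchmark_map = [("supply", "demand"), ("demand", "price"), ("price", "profit")]
ala_reader_map = [("supply", "demand"), ("price", "demand")]

print(jaccard_similarity(benchmark_map, ala_reader_map))  # ~0.667
```

Under this toy measure, per-response similarity scores could then be correlated across responses to compare how well each technology tracks the benchmark, which is one plausible reading of the correlation finding reported above.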

► We validated three assessment technologies for natural language responses to complex problems.
► We propose a benchmark method and compare it with two current technologies, ALA-Reader and T-MITOCAR.
► The benchmark model created the most descriptive concept maps.
► The ALA-Reader model had a higher correlation with the benchmark model than did T-MITOCAR.
► The benchmark model is a viable alternative to the two existing technologies.

Related Topics
Physical Sciences and Engineering › Computer Science › Computer Science Applications