Article ID: 10459419 | Journal: Intelligence | Published Year: 2013 | Pages: 18 | File Type: PDF
Abstract
Recently published studies on Complex Problem Solving (CPS) suggest that assessments of CPS using multiple complex systems are only moderately related to tests of classical cognitive abilities. Further, CPS assessments show incremental validity beyond tests of other cognitive abilities when predicting relevant outcomes. However, these empirical accounts have relied on single CPS assessment instruments, so we do not know whether the findings generalize to the construct level across different CPS assessment instruments. To answer this question, we tested a sample of N = 339 German university students who completed three CPS assessment instruments based on multiple complex systems (MicroDYN, the Genetics Lab, and MicroFIN) and the matrices subtest of the Intelligence Structure Test as a measure of reasoning. Students further reported their school grades. Analyses including latent multitrait-multimethod models provided support for the conceptualization of CPS as a complex cognitive ability. Results indicated that the different CPS assessment instruments showed sufficient convergent validity (with consistency coefficients mostly between .50 and .60). In addition, we found evidence for the divergent validity of CPS from reasoning (reasoning predicted the two CPS facets, knowledge and control, with βKNOW = .49 and βCON = .53, respectively). In the prediction of academic achievement, CPS explained variance in natural science grades after we controlled for reasoning (βCPS = .22), whereas social science grades were not predicted. Our findings suggest that the validity of CPS generalizes across different measurement instruments.
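For readers who want a concrete picture of how a latent multitrait-multimethod model of this kind can be specified, the sketch below uses the Python package semopy with lavaan-style model syntax. It is only an illustrative assumption, not the authors' actual specification: the indicator and factor names (dyn_know, gl_con, REASON, etc.) are hypothetical placeholders, and the structure shown (two CPS trait factors, M-1 method factors for two of the three instruments, reasoning as a latent predictor) is one common way such a model is set up.

import pandas as pd
from semopy import Model

# Hypothetical MTMM specification: two CPS trait factors (knowledge, control),
# each measured by indicators from three instruments (MicroDYN, Genetics Lab,
# MicroFIN), two method factors (M-1 approach, MicroDYN as reference method),
# and reasoning as a latent predictor of both CPS facets.
MODEL_DESC = """
# Trait (CPS facet) factors
KNOW =~ dyn_know + gl_know + fin_know
CON  =~ dyn_con  + gl_con  + fin_con

# Method factors for the non-reference instruments
GL  =~ gl_know  + gl_con
FIN =~ fin_know + fin_con

# Reasoning measured by matrices-test parcels
REASON =~ mat1 + mat2 + mat3

# Structural part: reasoning predicts both CPS facets
KNOW ~ REASON
CON  ~ REASON

# Method factors uncorrelated with traits and with each other
GL ~~ 0*KNOW
GL ~~ 0*CON
FIN ~~ 0*KNOW
FIN ~~ 0*CON
GL ~~ 0*FIN
"""

model = Model(MODEL_DESC)

# Usage (assuming a pandas DataFrame `df` whose columns match the indicator names):
# model.fit(df)
# print(model.inspect())  # parameter estimates, incl. loadings and regression weights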
Related Topics
Social Sciences and Humanities Psychology Experimental and Cognitive Psychology
Authors
, , , , , ,