Article ID | Journal | Published Year | Pages | File Type
---|---|---|---|---
6845099 | Learning and Individual Differences | 2014 | 7 | 
Abstract
Evaluating individual creativity is an important challenge in creativity research. We developed a training module for non-expert judges in which participants learned the definitions of the components of creativity and received expert feedback in an interactive creativity judgment exercise. We aimed to test whether and how the training module would increase the reliability and validity of non-expert ratings. Study 1 (N = 79) showed that the training had a positive effect on the test-retest reliability and validity of creativity ratings. Study 2 (N = 126) replicated the results on test-retest reliability and validity, but the absolute values remained low, indicating that trained participants cannot substitute for experts. In addition, Study 2 showed that the effect of the training module on the validity of creativity ratings was mediated by increased validity of ratings of novelty and elaboration. The results are discussed in terms of their theoretical and practical relevance.
Related Topics
Social Sciences and Humanities
Psychology
Developmental and Educational Psychology
Authors
Martin Storme, Nils Myszkowski, Pinar Çelik, Todd Lubart