Article ID: 6837775
Journal: Computers in Human Behavior
Published Year: 2016
Pages: 10
File Type: PDF
Abstract
Although learners' judgments of their own learning are crucial for self-regulated study, judgment accuracy tends to be low. To increase accuracy, we had participants make combined judgments. In Experiment 1, 247 participants studied a ten-chapter expository text. In the simple judgments group, participants rated after each chapter the likelihood of correctly answering a knowledge question on that chapter (judgment of learning; JOL). In the combined judgments group, participants rated text difficulty before making a JOL. No accuracy differences emerged between groups, but a comparison of early-chapter and late-chapter judgment magnitudes showed that the judgment manipulation had induced differences in cognitive processing. In Experiment 2, we therefore manipulated judgment scope. Rather than predicting correct answers for an entire chapter, another 256 participants rated after each chapter the likelihood of correctly answering a question on a specific concept from that chapter. Both judgment accuracy and knowledge test scores were higher in the combined judgments group. Moreover, whereas judgment accuracy dropped to a nonsignificant level from early to late chapters in the simple judgments group, accuracy remained constant with combined judgments. We discuss implications for research into metacomprehension processes in computer-supported learning and for adaptive learner support based on judgment prompts.
Related Topics
Physical Sciences and Engineering › Computer Science › Computer Science Applications