Article ID | Journal | Published Year | Pages | File Type
---|---|---|---|---
10457939 | Cognition | 2011 | 9 Pages | 
Abstract
A theoretical debate in artificial grammar learning (AGL) concerns the learnability of hierarchical structures. Recent studies using an AnBn grammar have drawn conflicting conclusions (Bahlmann and Friederici, 2006; De Vries et al., 2008). We argue that two conditions crucially affect learning AnBn structures: sufficient exposure to zero-level-of-embedding (0-LoE) exemplars and staged input. In two AGL experiments, learning was observed only when the training set was staged and contained 0-LoE exemplars. Our results may help in understanding how complex natural structures are learned from exemplars.
Related Topics
Life Sciences
Neuroscience
Cognitive Neuroscience
Authors
Jun Lai, Fenna H. Poletiek