Article ID: 6260843
Journal: Current Opinion in Behavioral Sciences
Published Year: 2015
Pages: 8
File Type: PDF
Abstract

• State spaces in reinforcement learning must be learned through experience.
• Latent cause models are a framework for learning structure in the environment.
• The theory predicts that gradual but not abrupt extinction of fear will be effective.
• It also explains why predictions of concurrent cues sometimes summate but not always.
• Structure learning may function as a core computational system shared across domains.

Effective reinforcement learning hinges on having an appropriate state representation. But where does this representation come from? We argue that the brain discovers state representations by trying to infer the latent causal structure of the task at hand, and assigning each latent cause to a separate state. In this paper, we review several implications of this latent cause framework, with a focus on Pavlovian conditioning. The framework suggests that conditioning is not the acquisition of associations between cues and outcomes, but rather the acquisition of associations between latent causes and observable stimuli. A latent cause interpretation of conditioning enables us to begin answering questions that have frustrated classical theories: Why do extinguished responses sometimes return? Why do stimuli presented in compound sometimes summate and sometimes do not? Beyond conditioning, the principles of latent causal inference may provide a general theory of structure learning across cognitive domains.
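To make the framework concrete, the sketch below shows one common way latent cause inference over conditioning trials can be formalized: a Chinese Restaurant Process prior over cause assignments combined with Bernoulli likelihoods for the observed cues. This is a minimal illustration under assumed modeling choices; the function names (crp_prior, latent_cause_posterior) and the concentration parameter alpha are illustrative, not the exact specification given in the article.

import numpy as np

def crp_prior(trial_counts, alpha):
    """Chinese Restaurant Process prior: probability of assigning the current
    trial to each previously inferred cause, or to a brand-new cause."""
    n = trial_counts.sum()
    return np.append(trial_counts, alpha) / (n + alpha)

def latent_cause_posterior(cues, cue_counts, trial_counts, alpha=1.0):
    """Posterior over which latent cause generated the current trial's cues.
    `cues` is a binary vector of observed stimuli; `cue_counts[k, d]` counts how
    often cue d appeared on trials previously assigned to cause k."""
    K = len(trial_counts)
    prior = crp_prior(trial_counts, alpha)
    likelihood = np.empty(K + 1)
    for k in range(K):
        # Beta(1,1)-smoothed Bernoulli probability that cause k emits each cue.
        theta = (cue_counts[k] + 1.0) / (trial_counts[k] + 2.0)
        likelihood[k] = np.prod(np.where(cues == 1, theta, 1.0 - theta))
    # A new cause has seen nothing, so each cue is equally likely present or absent.
    likelihood[K] = 0.5 ** len(cues)
    posterior = prior * likelihood
    return posterior / posterior.sum()

# Example: a tone (cue 0) has appeared on two trials, both assigned to cause 1.
# A third tone-only trial is most probably generated by that same cause.
cues = np.array([1, 0])                # tone present, light absent
cue_counts = np.array([[2, 0]])        # cause 1: tone seen twice, light never
trial_counts = np.array([2])
print(latent_cause_posterior(cues, cue_counts, trial_counts))

In a model of this kind, trials that look sufficiently different from acquisition trials are assigned to a new latent cause rather than updating the old one, which is one way such models account for the return of extinguished responses discussed in the abstract.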

Related Topics
Life Sciences Neuroscience Behavioral Neuroscience