Article ID: 4321827
Journal: Neuron
Published Year: 2010
Pages: 11
File Type: PDF
Abstract

Reinforcement learning (RL) uses sequential experience with situations (“states”) and outcomes to assess actions. Whereas model-free RL uses this experience directly, in the form of a reward prediction error (RPE), model-based RL uses it indirectly, building a model of the state transition and outcome structure of the environment, and evaluating actions by searching this model. A state prediction error (SPE) plays a central role, reporting discrepancies between the current model and the observed state transitions. Using functional magnetic resonance imaging in humans solving a probabilistic Markov decision task, we found the neural signature of an SPE in the intraparietal sulcus and lateral prefrontal cortex, in addition to the previously well-characterized RPE in the ventral striatum. This finding supports the existence of two unique forms of learning signal in humans, which may form the basis of distinct computational strategies for guiding behavior.
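The abstract contrasts a reward prediction error, which updates action values directly, with a state prediction error, which updates a transition model of the environment. As a minimal illustrative sketch (not the paper's exact algorithm), the Python below shows one plausible form of each update; the generic Q-learning rule, the transition-decay scheme, and all parameter values are assumptions introduced here for illustration.

import numpy as np

n_states, n_actions = 5, 2

# Model-free value table and model-based transition model (uniform prior).
Q = np.zeros((n_states, n_actions))
T = np.full((n_states, n_actions, n_states), 1.0 / n_states)

def rpe_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    # RPE: discrepancy between received and predicted reward
    # (a generic Q-learning form, assumed here).
    rpe = r + gamma * Q[s_next].max() - Q[s, a]
    Q[s, a] += alpha * rpe
    return rpe

def spe_update(T, s, a, s_next, eta=0.1):
    # SPE: how surprising the observed transition (s, a) -> s_next is
    # under the current model; 1 means fully unexpected, 0 fully predicted.
    spe = 1.0 - T[s, a, s_next]
    T[s, a] *= 1.0 - eta       # decay all competing transitions...
    T[s, a, s_next] += eta     # ...strengthen the observed one (row stays normalized)
    return spe

# One observed step: state 0, action 1, reward 1.0, next state 3.
print(rpe_update(Q, 0, 1, 1.0, 3), spe_update(T, 0, 1, 3))

Note that the SPE update consumes no reward information at all, consistent with the highlight below that model-based learning via an SPE is present even in the absence of reward.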

► Distinct neural signatures for model-based and model-free reinforcement learning
► Neural correlate of state prediction error (SPE) in the IPS and lateral PFC
► Neural correlate of reward prediction error (RPE) in the ventral striatum
► Model-based learning via an SPE is also present in the absence of reward information

Related Topics
Life Sciences, Neuroscience, Cellular and Molecular Neuroscience