Article ID | Journal | Published Year | Pages | File Type
---|---|---|---|---
6267286 | Current Opinion in Neurobiology | 2012 | 7 |
The reward prediction error (RPE) theory of dopamine (DA) function has enjoyed great success in the neuroscience of learning and decision-making. This theory is derived from model-free reinforcement learning (RL), in which choices are made simply on the basis of previously realized rewards. Recently, attention has turned to correlates of more flexible, albeit computationally complex, model-based methods in the brain. These methods are distinguished from model-free learning by their evaluation of candidate actions using expected future outcomes according to a world model. Puzzlingly, signatures from these computations seem to be pervasive in the very same regions previously thought to support model-free learning. Here, we review recent behavioral and neural evidence about these two systems, in an attempt to reconcile their enigmatic cohabitation in the brain.
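The contrast between the two systems can be made concrete. The sketch below, which is illustrative only and not drawn from the paper, contrasts a model-free learner that updates action values from a sampled reward prediction error with a model-based learner that computes the same values by planning over a known world model. The toy two-state task and all names (`P`, `R`, `Q`, `Q_mb`) are assumptions introduced here for illustration.

```python
import numpy as np

# Hypothetical 2-state, 2-action MDP used only to illustrate the contrast.
n_states, n_actions, gamma = 2, 2, 0.9
rng = np.random.default_rng(0)

# World model: transition probabilities P[s, a, s'] and expected rewards R[s, a].
P = np.array([[[0.8, 0.2], [0.1, 0.9]],
              [[0.5, 0.5], [0.9, 0.1]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])

# --- Model-free: learn Q from experienced rewards via the reward prediction error ---
Q = np.zeros((n_states, n_actions))
alpha = 0.1  # learning rate
s = 0
for _ in range(5000):
    # Epsilon-greedy action selection over the cached values.
    a = rng.integers(n_actions) if rng.random() < 0.1 else int(Q[s].argmax())
    s_next = rng.choice(n_states, p=P[s, a])
    r = R[s, a]  # rewards here are deterministic given (s, a)
    delta = r + gamma * Q[s_next].max() - Q[s, a]  # reward prediction error
    Q[s, a] += alpha * delta
    s = s_next

# --- Model-based: evaluate candidate actions by planning with the world model ---
V = np.zeros(n_states)
for _ in range(100):  # value iteration
    V = (R + gamma * P @ V).max(axis=1)
Q_mb = R + gamma * P @ V

print("model-free Q:\n", Q)    # learned from sampled RPEs
print("model-based Q:\n", Q_mb)  # computed from expected future outcomes
```

With enough experience the two value tables converge, but they are reached by different routes: the model-free learner caches values incrementally from realized rewards, while the model-based learner derives them on demand from the transition and reward model, which is what makes it flexible when the world changes.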
- Model-free RL is a successful theory of cortico-striatal DA function.
- Flexible model-based RL methods promise to enrich understanding of brain and behavior.
- Data suggest extensive overlap between putative neural correlates of these RL systems.