Article ID: 6266554
Journal: Current Opinion in Neurobiology
Published Year: 2014
Pages: 7
File Type: PDF
Abstract

• Decision-making involves various uses of internal models, such as those reflecting reward structures.
• Generalized prediction errors are used to improve value-based decision-making.
• Dopamine activity encodes multiplexed signals in reinforcement learning.
• Social decision-making is supported by general mechanisms of value-based decision-making.

A fundamental challenge for computational and cognitive neuroscience is to understand how reward-based learning and decision-making occur, and how accrued knowledge and internal models of the environment are incorporated into these processes. Remarkable progress has been made in the field, guided by the midbrain dopamine reward prediction error hypothesis and the underlying reinforcement learning framework, which does not involve internal models ('model-free'). Recent studies, however, have begun to address more complex decision-making processes that are integrated with model-free decision-making but also draw on internal models of environmental reward structures and of the minds of other agents, including model-based reinforcement learning and the use of generalized prediction errors. Even dopamine, a classic model-free signal, may carry multiplexed signals that incorporate model-based information and contribute to representational learning of reward structure.
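The model-free reward prediction error the abstract refers to can be illustrated with a minimal temporal-difference (TD) learning sketch. The toy task, parameter values, and variable names below are illustrative assumptions, not taken from the article; the code only shows the generic TD(0) update in which the prediction error shrinks as reward becomes predicted.

```python
# Minimal model-free TD(0) sketch of a reward prediction error (RPE).
# Task structure and parameters are assumed for illustration only.

GAMMA = 0.9   # discount factor (assumed)
ALPHA = 0.1   # learning rate (assumed)

def td_update(V, s, r, s_next):
    """Apply one TD(0) update to V[s] and return the RPE (delta)."""
    delta = r + GAMMA * V[s_next] - V[s]   # reward prediction error
    V[s] += ALPHA * delta
    return delta

# Toy chain: cue -> outcome -> end, with reward 1 delivered at the outcome.
V = {"cue": 0.0, "outcome": 0.0, "end": 0.0}
for episode in range(200):
    td_update(V, "cue", 0.0, "outcome")          # no reward at the cue
    delta = td_update(V, "outcome", 1.0, "end")  # reward delivered

# After learning, V["outcome"] approaches 1 and V["cue"] approaches
# GAMMA * V["outcome"], while the RPE at the now-predicted reward
# shrinks toward zero.
```

This mirrors, in the simplest possible form, the classic observation behind the dopamine RPE hypothesis: as the reward becomes predicted by the cue, the error signal at reward delivery vanishes and value transfers to the predictor.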

Related Topics
Life Sciences Neuroscience Neuroscience (General)
Authors