Article ID: 4321192
Journal: Neuron
Published Year: 2014
Pages: 13
File Type: PDF
Abstract

• OFC may encode the current abstract state of a task for reinforcement learning
• Diverse inputs to OFC allow states to include information that may not be observable
• State information is used for both model-based and model-free reinforcement learning

Summary
Orbitofrontal cortex (OFC) has long been known to play an important role in decision making. However, the exact nature of that role has remained elusive. Here, we propose a unifying theory of OFC function. We hypothesize that OFC provides an abstraction of currently available information in the form of a labeling of the current task state, which is used for reinforcement learning (RL) elsewhere in the brain. This function is especially critical when task states include unobservable information, for instance, from working memory. We use this framework to explain classic findings in reversal learning, delayed alternation, extinction, and devaluation as well as more recent findings showing the effect of OFC lesions on the firing of dopaminergic neurons in ventral tegmental area (VTA) in rodents performing an RL task. In addition, we generate a number of testable experimental predictions that can distinguish our theory from other accounts of OFC function.
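To make the state-labeling idea concrete, the following is a minimal illustrative sketch, not code from the paper: tabular Q-learning on a hypothetical two-armed reversal task, comparing an agent whose state is only the observable cue with one whose state also carries an unobservable context (which reversal block the task is in). The function name run_agent, the task structure, and all parameters (alpha, epsilon, block length) are assumptions introduced purely for illustration.

```python
# Illustrative sketch (assumptions, not the authors' model): tabular Q-learning
# on a two-armed reversal task. One agent conditions only on an observable cue;
# the other conditions on a state that includes the unobservable reversal
# context, standing in for an OFC-like task-state representation.
import random

def run_agent(use_latent_context, n_trials=400, block=25,
              alpha=0.2, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    Q = {}                      # Q[(state, action)] -> value estimate
    rewards = []
    context = 0                 # latent variable: 0 = arm 0 rewarded, 1 = arm 1
    for t in range(n_trials):
        if t > 0 and t % block == 0:
            context = 1 - context          # unsignaled reversal of contingencies
        # The "task state" the agent conditions on; here the latent context is
        # handed to the agent directly, standing in for an inferred state label.
        state = context if use_latent_context else "cue"
        # epsilon-greedy action selection over the two arms
        if rng.random() < epsilon:
            action = rng.randrange(2)
        else:
            q0, q1 = Q.get((state, 0), 0.0), Q.get((state, 1), 0.0)
            action = 0 if q0 >= q1 else 1
        reward = 1.0 if action == context else 0.0
        # simple delta-rule (TD) update of the chosen state-action value
        key = (state, action)
        Q[key] = Q.get(key, 0.0) + alpha * (reward - Q.get(key, 0.0))
        rewards.append(reward)
    return sum(rewards) / n_trials

if __name__ == "__main__":
    print("stimulus-only state     :", run_agent(use_latent_context=False))
    print("state with latent context:", run_agent(use_latent_context=True))
```

In this sketch the agent whose state includes the latent context keeps separate, stable values for each contingency and performs well across reversals, while the stimulus-only agent must relearn after every reversal. This mirrors the abstract's claim that a state representation incorporating unobservable information is what makes RL work well in such tasks, although inferring that state (rather than being given it, as here) is the harder problem the theory assigns to OFC.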

Related Topics
Life Sciences; Neuroscience; Cellular and Molecular Neuroscience