Article ID: 6267255 · Journal: Current Opinion in Neurobiology · Published Year: 2012 · Pages: 7 · File Type: PDF
Abstract

The hierarchical structure of human and animal behavior has been of critical interest in neuroscience for many years. Yet understanding the neural processes that give rise to such structure remains an open challenge. In recent research, a new perspective on hierarchical behavior has begun to take shape, inspired by ideas from machine learning, and in particular the framework of hierarchical reinforcement learning. Hierarchical reinforcement learning builds on traditional reinforcement learning mechanisms, extending them to accommodate temporally extended behaviors or subroutines. The resulting computational paradigm has begun to influence both theoretical and empirical work in neuroscience, conceptually aligning the study of hierarchical behavior with research on other aspects of learning and decision making, and giving rise to some thought-provoking new findings.

► Reinforcement learning models in neuroscience face a challenge in accounting for learning and decision making in complex tasks.
► Recent research has begun to import ideas from hierarchical reinforcement learning, a computational paradigm that leverages task-subtask hierarchies to cope with large-scale problems.
► Hierarchical reinforcement learning has given rise to new interpretations of established findings, and inspired prospective tests of some novel predictions.
► An open challenge, highlighted by hierarchical reinforcement learning, is to understand how useful subgoals are identified.
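To make the core idea concrete: the abstract describes hierarchical reinforcement learning as extending traditional reinforcement learning to accommodate temporally extended behaviors or subroutines. The toy sketch below (not from the article; the corridor task, option names, and parameter values are illustrative assumptions) shows the characteristic SMDP-style value update, in which an entire multi-step subroutine is treated as a single decision and discounted by gamma**k for its k primitive steps.

```python
import random

# Illustrative sketch only: SMDP-style Q-learning over "options"
# (temporally extended subroutines) on a 1-D corridor of states 0..N.
# Primitive actions move the agent by +/-1; reaching state N pays +1.
# The environment, options, and constants here are hypothetical.

N = 6          # goal state
GAMMA = 0.9    # discount factor
ALPHA = 0.5    # learning rate
EPS = 0.1      # exploration rate

# An option here is simply a fixed sequence of primitive actions:
# a 3-step "move right" subroutine, and a primitive action wrapped
# as a one-step option.
OPTIONS = {"right_x3": [+1, +1, +1], "right_x1": [+1]}

def step(state, action):
    """Apply one primitive action; reward 1.0 on reaching the goal."""
    nxt = max(0, min(N, state + action))
    reward = 1.0 if nxt == N else 0.0
    return nxt, reward, nxt == N

def run_option(state, moves):
    """Execute a subroutine to completion, accumulating discounted reward."""
    total, discount, k = 0.0, 1.0, 0
    done = False
    for a in moves:
        state, r, done = step(state, a)
        total += discount * r
        discount *= GAMMA
        k += 1
        if done:
            break
    return state, total, k, done

Q = {(s, o): 0.0 for s in range(N + 1) for o in OPTIONS}

random.seed(0)
for _ in range(500):
    s = 0
    while s != N:
        # Epsilon-greedy choice among options, not primitive actions.
        if random.random() < EPS:
            o = random.choice(list(OPTIONS))
        else:
            o = max(OPTIONS, key=lambda opt: Q[(s, opt)])
        s2, r_cum, k, done = run_option(s, OPTIONS[o])
        backup = 0.0 if done else max(Q[(s2, o2)] for o2 in OPTIONS)
        # SMDP update: the whole k-step subroutine is one decision,
        # so the bootstrap term is discounted by GAMMA ** k.
        Q[(s, o)] += ALPHA * (r_cum + GAMMA ** k * backup - Q[(s, o)])
        s = s2
```

The point of the sketch is the update rule in the last lines: because credit is assigned at the level of subroutines rather than individual steps, the effective decision horizon shrinks, which is how the framework copes with large-scale problems.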

Related Topics
Life Sciences Neuroscience Neuroscience (General)