Article ID: 8838155
Journal: Current Opinion in Behavioral Sciences
Published Year: 2018
Pages: 6
File Type: PDF
Abstract
Model-free (MF) reinforcement learning (RL) algorithms account for a wealth of neuroscientific and behavioral data pertinent to habits; however, conspicuous disparities between model-predicted response patterns and experimental data have exposed the inadequacy of MF-RL to fully capture the domain of habitual behavior. We review several extensions to generic MF-RL algorithms that could narrow the gap between theory and empirical data. We discuss insights gained from extending RL algorithms to operate in complex environments with multidimensional continuous state spaces. We also review recent advances in hierarchical RL and their potential relevance to habits. Neurobiological evidence suggests that similar mechanisms for habitual learning and control may apply across diverse psychological domains.
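To make the "generic MF-RL algorithm" the abstract refers to concrete, the sketch below shows a standard tabular Q-learning update, the textbook model-free temporal-difference rule. It is only an illustrative baseline under assumed parameter values and state/action names, not the specific models reviewed in the article.

```python
import random
from collections import defaultdict

# Minimal tabular Q-learning sketch: a generic model-free TD update.
# Parameter values and names are illustrative assumptions, not the paper's model.

ALPHA = 0.1     # learning rate
GAMMA = 0.95    # discount factor
EPSILON = 0.1   # exploration rate

Q = defaultdict(float)  # Q[(state, action)] -> cached action value

def choose_action(state, actions):
    """Epsilon-greedy choice over cached action values (no world model is consulted)."""
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def td_update(state, action, reward, next_state, actions):
    """Model-free update: nudge Q toward the reward plus the bootstrapped next-state value."""
    target = reward + GAMMA * max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += ALPHA * (target - Q[(state, action)])
```

Because values are cached per state-action pair and updated only from experienced rewards, responding becomes insensitive to changes in outcome value until new experience overwrites the cache, which is the property linking such algorithms to habits.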
Related Topics
Life Sciences, Neuroscience, Behavioral Neuroscience