Article ID: 405613
Journal: Neural Networks
Published Year: 2009
Pages: 8
File Type: PDF
Abstract

Animals increase or decrease their future tendency to emit an action based on whether performing that action has, in the past, resulted in positive or negative reinforcement. An analysis in the companion paper [Zhang, J. (2009). Adaptive learning via selectionism and Bayesianism. Part I: Connection between the two. Neural Networks, 22(3), 220–228] of this selectionist style of learning reveals a resemblance between its ensemble-level dynamics, which govern the change of action probability, and Bayesian learning, in which evidence (in this case, reward) is distributively applied to all action alternatives. Here, this equivalence is further explored in solving the temporal credit-assignment problem during the learning of an action sequence (“operant chain”). Emerging naturally are the notion of secondary (conditioned) reinforcement, which predicts the average reward associated with a stimulus, and the notion of an actor–critic architecture, which involves concurrent learning of both action probability and reward prediction. While both are consistent with solutions provided by contemporary reinforcement learning theory (Sutton & Barto, 1998) for optimizing sequential decision-making in stationary Markov environments, we investigate the effect of action learning on reward prediction when both are carried out concurrently in an on-line scheme.
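To make the actor–critic idea the abstract references concrete, below is a minimal sketch of concurrent learning of action probabilities (actor) and reward prediction (critic), in the standard form described by Sutton & Barto (1998). It is not the paper's own algorithm: the toy environment, the learning rates, and all variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_states, n_actions = 3, 2
alpha_v, alpha_p = 0.1, 0.05  # critic and actor learning rates (illustrative)
gamma = 0.9                   # discount factor (illustrative)

V = np.zeros(n_states)                   # critic: predicted reward per state
prefs = np.zeros((n_states, n_actions))  # actor: preferences -> softmax probabilities

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def step(state, action):
    # Toy stationary Markov environment (purely hypothetical):
    # only action 1 taken in state 2 is rewarded; transitions cycle forward.
    reward = 1.0 if (state == 2 and action == 1) else 0.0
    next_state = (state + action) % n_states
    return next_state, reward

state = 0
for t in range(10_000):
    probs = softmax(prefs[state])
    action = rng.choice(n_actions, p=probs)
    next_state, reward = step(state, action)

    # Critic: the TD error compares obtained reward against the prediction,
    # playing the role of secondary (conditioned) reinforcement.
    td_error = reward + gamma * V[next_state] - V[state]
    V[state] += alpha_v * td_error

    # Actor: shift probability toward actions whose outcomes beat the prediction.
    grad = -probs
    grad[action] += 1.0
    prefs[state] += alpha_p * td_error * grad

    state = next_state
```

Because the actor's updates change the action probabilities while the critic is still learning, the reward prediction V is chasing a moving target; the abstract's question about the effect of concurrent on-line action learning on reward prediction concerns exactly this interaction.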
