Article ID: 411577
Journal: Neurocomputing
Published Year: 2016
Pages: 7 Pages
File Type: PDF
Abstract

In the field of Learning Automata (LA), how to design faster learning algorithms has always been a key issue. Among the solutions reported in the literature, the stochastic estimator reward-inaction learning automaton (SERI), which belongs to the family of Maximum Likelihood estimator based LAs, has been recognized as the fastest ϵ-optimal LA. In this paper, we first point out the limitations of traditional Maximum Likelihood Estimator (MLE) based LAs and then introduce a Bayesian estimator based approach, which is demonstrated to be equivalent to Laplace smoothing of the traditional method, to overcome these limitations. The key idea is that the Bayesian estimator, which estimates the reward probability of each action of the LA, aims to reconstruct a Bernoulli distribution from sequential data; it is formalized with a conjugate prior from the exponential family, so that the LA retains a relatively simple form that is easy to implement. In addition, we indicate that this Bayesian estimator can be applied to update almost all existing MLE-based LAs. Based on the proposed Bayesian estimator, a new LA, known as the Generalized Bayesian Stochastic Estimator (GBSE) LA, is presented and proved to be ϵ-optimal. Finally, extensive experimental results on benchmarks demonstrate that the proposed learning scheme is more efficient than SERI, the current best LA.
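The equivalence stated in the abstract, that the Bayesian estimate amounts to Laplace smoothing of the MLE, can be illustrated with a minimal sketch. The snippet below is not from the paper: the environment reward probabilities, the uniform action selection, and the Beta(1,1) prior are illustrative assumptions; it only contrasts the MLE reward estimate W/Z with the Beta-Bernoulli posterior mean (W+1)/(Z+2).

```python
import random

# Minimal sketch (assumptions, not the paper's algorithm): simulate a Bernoulli
# environment and compare, per action, the MLE reward estimate with the
# Bayesian estimate under a Beta(1,1) prior, i.e. Laplace smoothing of the MLE.
reward_probs = [0.65, 0.70, 0.80]      # hypothetical environment reward probabilities
wins = [0] * len(reward_probs)         # W: rewards observed per action
pulls = [0] * len(reward_probs)        # Z: times each action was selected

random.seed(0)
for _ in range(200):
    a = random.randrange(len(reward_probs))           # select an action (uniformly, for illustration)
    r = 1 if random.random() < reward_probs[a] else 0  # Bernoulli feedback from the environment
    wins[a] += r
    pulls[a] += 1

for a in range(len(reward_probs)):
    mle = wins[a] / pulls[a] if pulls[a] else 0.0      # MLE: W / Z
    bayes = (wins[a] + 1) / (pulls[a] + 2)             # Beta(1,1) posterior mean: (W + 1) / (Z + 2)
    print(f"action {a}: MLE={mle:.3f}  Bayesian={bayes:.3f}")
```

With few samples the Bayesian estimate stays away from the extremes 0 and 1, which is the practical effect of the smoothing; as Z grows, the two estimates converge.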

Related Topics
Physical Sciences and Engineering > Computer Science > Artificial Intelligence
Authors