Article ID: 402394
Journal: Knowledge-Based Systems
Published Year: 2013
Pages: 14 Pages
File Type: PDF
Abstract

In the family of Learning Classifier Systems, the classifier system XCS is the most widely used and investigated. However, the standard XCS has difficulty solving large multi-step problems, where long action chains are needed to obtain delayed rewards. To date, the reinforcement learning technique in XCS has been based on Q-learning, which optimizes the discounted total reward received by an agent but tends to limit the length of action chains. In contrast, undiscounted reinforcement learning methods are available, such as R-learning and average reward reinforcement learning in general, which optimize the average reward per time step. In this paper, R-learning replaces Q-learning as the reinforcement learning technique employed by XCS. The modification yields a classifier system that is fast and able to solve large maze problems. In addition, it produces uniformly spaced payoff levels, which can support long action chains and thus effectively prevent overgeneralization.
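To make the contrast concrete, below is a minimal sketch of the two value-update rules the abstract refers to: the standard Q-learning update (discounted target) and Schwartz-style R-learning (undiscounted, average-reward-adjusted target). The function names, tabular `dict` representation, and the step sizes `alpha` and `beta` are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch (not the paper's implementation): tabular Q-learning
# vs. R-learning updates. Q is a dict mapping state -> {action: value}.

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """One Q-learning step toward the discounted-return target."""
    target = r + gamma * max(Q[s_next].values())
    Q[s][a] += alpha * (target - Q[s][a])
    return Q[s][a]

def r_learning_update(Q, rho, s, a, r, s_next, alpha=0.1, beta=0.05):
    """One R-learning step: no discount factor; instead the average-reward
    estimate rho is subtracted from the target. Returns (value, rho)."""
    was_greedy = Q[s][a] == max(Q[s].values())  # checked before the update
    target = r - rho + max(Q[s_next].values())
    Q[s][a] += alpha * (target - Q[s][a])
    if was_greedy:
        # rho is adjusted only after greedy actions, as in standard R-learning.
        rho += beta * (r - rho + max(Q[s_next].values()) - max(Q[s].values()))
    return Q[s][a], rho
```

Because R-learning's target contains no `gamma**n` factor, payoff differences between successive steps of a chain stay constant rather than shrinking geometrically, which is the intuition behind the uniformly spaced payoff levels mentioned above.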

Related Topics
Physical Sciences and Engineering › Computer Science › Artificial Intelligence