Article ID: 496765
Journal: Applied Soft Computing
Published Year: 2011
Pages: 13
File Type: PDF
Abstract

Reinforcement learning (RL) is a machine intelligence technique with several characteristics that make it suitable for solving real-world problems. However, RL agents generally face a very large state space in many applications, and they must take actions in every state many times to find the optimal policy. In this work, a special type of knowledge about actions is employed to improve the performance of off-policy, incremental, and model-free reinforcement learning with discrete state and action spaces. The action is one of the components of an RL agent; for each action, an associated opposite action is defined. Actions and opposite actions are used within the reinforcement learning framework to update the value function, resulting in faster convergence. The effect of opposite actions on several reinforcement learning algorithms is investigated.
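The abstract describes updating the value function with both the taken action and its opposite. Below is a minimal sketch of how such an opposition-based update might look in tabular Q-learning; the grid-world action set, the OPPOSITE mapping, the negated-reward stand-in for the "opposite reward", and the env.reset/env.step interface are illustrative assumptions, not details taken from the paper.

```python
import random
from collections import defaultdict

# Hypothetical grid-world actions; the opposite of each move is the reverse move.
ACTIONS = ["up", "down", "left", "right"]
OPPOSITE = {"up": "down", "down": "up", "left": "right", "right": "left"}

def opposition_q_learning(env, episodes=500, alpha=0.1, gamma=0.95, epsilon=0.1):
    """Tabular Q-learning with an extra update for the opposite action.

    After the usual update of Q(s, a), the value of Q(s, opposite(a)) is also
    updated, here using the negated reward as a stand-in for the opposite
    reward (an assumption for illustration, not the paper's exact rule).
    """
    Q = defaultdict(float)  # keys are (state, action) pairs

    for _ in range(episodes):
        state = env.reset()          # assumed environment interface
        done = False
        while not done:
            # epsilon-greedy action selection
            if random.random() < epsilon:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: Q[(state, a)])

            next_state, reward, done = env.step(action)
            best_next = max(Q[(next_state, a)] for a in ACTIONS)

            # Standard Q-learning update for the taken action
            Q[(state, action)] += alpha * (
                reward + gamma * best_next - Q[(state, action)]
            )

            # Additional update for the opposite action with the opposite reward,
            # so two state-action values are refined from a single transition.
            opp_action = OPPOSITE[action]
            opp_reward = -reward
            Q[(state, opp_action)] += alpha * (
                opp_reward + gamma * best_next - Q[(state, opp_action)]
            )

            state = next_state
    return Q
```

Updating both entries per transition is what gives the reported speed-up: each observed reward informs two state-action values instead of one, at the cost of trusting the assumed opposite-reward definition.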

Related Topics
Physical Sciences and Engineering > Computer Science > Computer Science Applications