Article ID: 393635
Journal: Information Sciences
Published Year: 2013
Pages: 16
File Type: PDF
Abstract

The paper deals with the problem of learning the control of Multi-Component Robotic Systems (MCRSs) by applying Multi-Agent Reinforcement Learning (MARL) algorithms. Modeling Linked MCRSs usually leads to over-constrained environments, posing great difficulties for efficient learning with conventional single- and multi-agent reinforcement learning algorithms. In this paper, we propose a hybrid learning algorithm composed of a modified Q-Learning algorithm embedding an Undesired State-Action Prediction (USAP) module, trained by a supervised learning approach, which learns a model predicting undesired transitions to states that break physical constraints. The USAP module’s output is used by the Q-Learning algorithm to prevent these undesired transitions, thereby boosting learning efficiency. This hybrid approach is extended to the multi-agent case by embedding the USAP module in Distributed Round-Robin Q-Learning (D-RR-QL), which requires very little communication among agents. We present results of computational experiments conducted on the classical multi-agent taxi scheduling task and a hose transportation task. Results show a considerable learning gain in both time and accuracy compared to the state-of-the-art Distributed Q-Learning approach in the deterministic taxi scheduling task. In the hose transportation task, the USAP module introduces a significant improvement in learning convergence speed.
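To make the idea concrete, the following is a minimal sketch (not the authors' code) of how a USAP-style filter could be wired into tabular Q-Learning: a predictor flags (state, action) pairs expected to break a physical constraint, and the agent restricts its epsilon-greedy choice to the unflagged actions. The `usap` callable, the `env.step` interface returning `(next_state, reward, done)`, and the hyperparameter values are all assumptions for illustration.

```python
# Sketch of USAP-filtered Q-Learning, per the abstract's description.
# `usap(state, action)` is a hypothetical predictor (e.g. a classifier
# trained by supervised learning on constraint-violation examples) that
# returns True when the transition is predicted to be undesired.
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1  # illustrative hyperparameters
Q = defaultdict(float)                  # tabular Q-values: Q[(state, action)]

def allowed_actions(state, actions, usap):
    # Mask out actions the USAP module predicts lead to undesired states.
    safe = [a for a in actions if not usap(state, a)]
    return safe or list(actions)  # fall back if every action is flagged

def q_learning_step(state, actions, usap, env):
    # Epsilon-greedy selection restricted to USAP-approved actions.
    candidates = allowed_actions(state, actions, usap)
    if random.random() < EPSILON:
        action = random.choice(candidates)
    else:
        action = max(candidates, key=lambda a: Q[(state, a)])
    next_state, reward, done = env.step(action)
    # Standard one-step Q-Learning update.
    best_next = 0.0 if done else max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
    return next_state, done
```

Because undesired actions are never selected, the agent avoids wasting exploration on transitions that would violate the physical constraints, which is the source of the convergence speed-up the abstract reports.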

Related Topics
Physical Sciences and Engineering › Computer Science › Artificial Intelligence