Article ID: 413292
Journal: Robotics and Autonomous Systems
Published Year: 2010
Pages: 10 Pages
File Type: PDF
Abstract

One of the main problems with robots is their lack of adaptability: they need to be readjusted every time they change their working place. To address this, we propose a learning approach for mobile robots that combines a reinforcement-based strategy with a dynamic sensor-state mapping. This strategy is practically parameterless and minimises the adjustments needed when the robot operates in a different environment or performs a different task. Our system simultaneously learns the state space and the action to execute in each state. The learning algorithm attempts to maximise the time before a robot failure in order to obtain a control policy suited to the desired behaviour, which also makes the learning process more interpretable. The state representation is created dynamically: the robot starts with an empty state space and adds new states as it encounters situations it has not seen before. This dynamic creation of the state representation avoids the classic, error-prone and cyclic process of designing and testing an ad hoc representation. We performed an exhaustive study of our approach, comparing it with other classic strategies. Unexpectedly, learning both perception and action does not increase the learning time.
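The dynamic state creation described in the abstract (start with an empty state space, add a state whenever a sensor reading does not match any known situation) can be sketched as a small value-learning loop. This is an illustrative sketch only, not the authors' implementation: the class name, the Euclidean distance threshold `radius`, and the reward scheme are all assumptions made for the example.

```python
class DynamicStateQLearner:
    """Value learning over a dynamically grown state space.

    A sensor reading maps to an existing state only if it lies within
    `radius` of that state's prototype vector; otherwise a new state is
    created on the fly. (The distance test and parameters are
    illustrative assumptions, not taken from the paper.)
    """

    def __init__(self, actions, radius=0.5, alpha=0.1, gamma=0.9):
        self.actions = actions
        self.radius = radius        # novelty threshold for creating a state
        self.alpha = alpha          # learning rate
        self.gamma = gamma          # discount factor
        self.prototypes = []        # one representative sensor vector per state
        self.q = []                 # q[state][action] value table, grown with states

    def state_for(self, reading):
        """Return the state index for a sensor reading, creating one if novel."""
        for i, proto in enumerate(self.prototypes):
            dist = sum((a - b) ** 2 for a, b in zip(proto, reading)) ** 0.5
            if dist <= self.radius:
                return i
        # Unseen situation: add a new state with zero-initialised values.
        self.prototypes.append(list(reading))
        self.q.append([0.0] * len(self.actions))
        return len(self.prototypes) - 1

    def best_action(self, state):
        """Greedy action for a state (index into self.actions)."""
        values = self.q[state]
        return max(range(len(self.actions)), key=values.__getitem__)

    def update(self, s, a, reward, s_next):
        """Standard Q-learning update toward reward + discounted future value."""
        target = reward + self.gamma * max(self.q[s_next])
        self.q[s][a] += self.alpha * (target - self.q[s][a])


# Tiny demo: two clearly distinct readings create two states; a reading
# close to the first prototype maps back to the existing state.
learner = DynamicStateQLearner(actions=["left", "right"], radius=0.5)
s0 = learner.state_for([0.0, 0.0])
s1 = learner.state_for([5.0, 5.0])
s0_again = learner.state_for([0.1, 0.0])
learner.update(s0, 0, reward=1.0, s_next=s1)
```

A reward of +1 per surviving step (with failure ending the episode) would make the greedy policy maximise expected time before failure, in the spirit of the strategy described above.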

Research highlights
► Robots need to adapt to their workplace and learn from their past experiences.
► Adaptation requires simultaneous learning of how to perceive and how to act.
► Autonomous learning needs parameterless strategies.
► I_Tbf: a new strategy able to predict when a robot mistake will occur.
► The control policy is iterated to increase the time before a robot failure.

Related Topics
Physical Sciences and Engineering Computer Science Artificial Intelligence