Article ID: 413451 | Journal: Robotics and Autonomous Systems | Published Year: 2011 | Pages: 13 | File Type: PDF
Abstract

When describing robot motion with dynamic movement primitives (DMPs), three kinds of parameters are used: the goal (trajectory endpoint), the shape, and the temporal scaling. In reinforcement learning with DMPs, the goal and temporal scaling parameters are usually predefined, and only the weights that shape the DMP are learned. In many tasks, however, the best goal position is not known a priori and must itself be learned. Here we therefore specifically address the question of how to combine goal and shape parameter learning simultaneously. This is a difficult problem because learning the two sets of parameters could easily interfere destructively. We apply value function approximation techniques for goal learning and direct policy search methods for shape learning; specifically, we use “policy improvement with path integrals” and the “natural actor critic” for the policy search. We solve a learning-to-pour-liquid task both in simulation and on a PA-10 robot arm. We present results for learning from scratch, for learning initialized by human demonstration, and for modifying the tool for the learned DMPs. We observe that the combination of goal and shape learning is stable and robust over large parameter regimes, and learning converges quickly even in the presence of disturbances, which makes the combined method suitable for robotic applications.
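To make the parameterization concrete, here is a minimal sketch of the standard one-dimensional DMP transformation and canonical systems (goal g, start x0, shape weights w_i on Gaussian basis functions, temporal scaling tau), integrated with Euler steps. The gains K and D, the basis placement, and all numeric values below are illustrative assumptions, not the parameter values used in the paper.

```python
import math

def dmp_rollout(x0, g, weights, centers, widths,
                tau=1.0, dt=0.001, K=100.0, D=20.0, alpha=4.0, T=1.0):
    """Integrate a 1-D DMP by Euler steps.

    Transformation system: tau * dv/dt = K*(g - x) - D*v + (g - x0)*f(s)
                           tau * dx/dt = v
    Canonical system:      tau * ds/dt = -alpha * s
    Forcing term:          f(s) = s * sum_i w_i psi_i(s) / sum_i psi_i(s)
    with Gaussian basis functions psi_i(s) = exp(-h_i * (s - c_i)**2).
    """
    x, v, s = x0, 0.0, 1.0
    traj = [x]
    for _ in range(int(T / dt)):
        psis = [math.exp(-h * (s - c) ** 2) for c, h in zip(centers, widths)]
        denom = sum(psis) or 1.0  # guard against an all-zero basis response
        f = s * sum(w * p for w, p in zip(weights, psis)) / denom
        v += dt * (K * (g - x) - D * v + (g - x0) * f) / tau
        x += dt * v / tau
        s += dt * (-alpha * s) / tau
        traj.append(x)
    return traj

# With zero shape weights the DMP reduces to a critically damped spring
# (D = 2*sqrt(K)) that converges to the goal; RL then adjusts g (goal
# learning) and the weights (shape learning) on top of this attractor.
centers = [math.exp(-4.0 * i / 9.0) for i in range(10)]  # spaced in phase s
widths = [50.0] * 10
traj = dmp_rollout(x0=0.0, g=1.0, weights=[0.0] * 10,
                   centers=centers, widths=widths)
```

Because the forcing term is scaled by the phase variable s, it vanishes as s decays, so convergence to the goal g is guaranteed regardless of the learned weights; this is what makes it safe to adapt goal and shape independently.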

► Combination of goal and shape learning for dynamic movement primitives.
► Combination of value function approximation and direct policy search methods.
► Comparison of direct policy search methods.

Related Topics
Physical Sciences and Engineering Computer Science Artificial Intelligence
Authors