Article ID: 4948643
Journal: Robotics and Autonomous Systems
Published Year: 2017
Pages: 14 Pages
File Type: PDF
Abstract
To address the problem of stability control for biped robots, the concept of stability training is proposed, in which a training platform exerts amplitude-limited random disturbances on the robot being trained. In this work, an approach to achieving posture-stabilizing capability based on stability training and reinforcement learning is explored and verified by simulation. An automatic state-space abstraction method using Gaussian basis functions and internal evaluation indexes is proposed to speed up the learning process. A hierarchical stabilizer using the Monte Carlo method is designed according to the concept of variable ZMP. Training samples are extracted from the state transitions of the stability-training process using balance controllers based on the robot dynamic model. Stabilizers are trained both with and without the automatic state-space abstraction, and simulation tests of both are then conducted under conditions in which the training platform exerts amplitude-limited random disturbances on the robot. The influence of model errors is also studied by introducing deviations of the CoM position during the simulation tests. Comparing the simulation results of the two learned stabilizers and the model-based balance controller shows that the designed stabilizer achieves a success rate close to that of the ideal model-based balance controller and fully exploits the driving ability of the robot under large disturbances of ±30° platform inclination. Furthermore, the effects of model error can be overcome by retraining on state-transition data that contain the model error.
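For readers unfamiliar with the two ingredients named in the abstract, the Python sketch below illustrates the general idea of combining Gaussian basis-function state features with a Monte Carlo value update. It is a minimal, hypothetical illustration: the basis centres, widths, learning rate, and toy episode data are assumptions for demonstration only, not the stabilizer or the abstraction method described in the paper.

```python
import numpy as np

# Hypothetical illustration: Gaussian radial-basis features over a 1-D state
# (e.g. platform inclination in radians) with a gradient Monte Carlo value
# update. All numeric choices below are assumptions, not values from the paper.

def gaussian_features(x, centres, sigma):
    """Activation of each Gaussian basis function for scalar state x."""
    return np.exp(-0.5 * ((x - centres) / sigma) ** 2)

# Basis centres spanning the +/-30 degree disturbance range mentioned in the tests.
centres = np.deg2rad(np.linspace(-30.0, 30.0, 13))
sigma = np.deg2rad(5.0)

weights = np.zeros_like(centres)  # linear value-function weights, one per basis
alpha = 0.1                       # learning rate (assumed)

def monte_carlo_update(episode, gamma=0.99):
    """Every-visit Monte Carlo update of the value weights for one episode.

    `episode` is a list of (state, reward) pairs gathered during one
    stability-training run; returns are accumulated backwards in time.
    """
    global weights
    g = 0.0
    for state, reward in reversed(episode):
        g = reward + gamma * g
        phi = gaussian_features(state, centres, sigma)
        # Step the linear value estimate towards the sampled return.
        weights += alpha * (g - phi @ weights) * phi

# Toy usage: a short fabricated episode of (inclination, reward) samples.
episode = [(np.deg2rad(a), -abs(a) / 30.0) for a in (12.0, 8.0, 3.0, 0.5)]
monte_carlo_update(episode)
print(gaussian_features(0.0, centres, sigma) @ weights)
```

In this sketch the Gaussian features play the role of a soft state-space abstraction (nearby inclinations share feature activations), while the backward accumulation of returns is the standard Monte Carlo ingredient; the paper's actual hierarchical stabilizer and inner evaluation indexes are not reproduced here.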
Related Topics
Physical Sciences and Engineering › Computer Science › Artificial Intelligence
Authors