Article code | Journal code | Year published | English article | Full-text version
---|---|---|---|---
412345 | 679627 | 2014 | 12-page PDF | Free download
• Autonomous online learning of a representation of the robot workspace.
• Locations in space are encoded using gaze-centered motor coordinates.
• The robot is able to estimate the Reachability of visually detected objects.
• The robot can modify its body configuration to improve the quality of arm reaching.
• Overall, we realized a form of intelligent whole-body reaching in a humanoid robot.
We describe a learning strategy that allows a humanoid robot to autonomously build a representation of its workspace, which we call the Reachable Space Map. The robot can use this map to: (i) estimate the Reachability of a visually detected object (i.e. judge whether the object can be reached for, and how well, according to some performance metric) and (ii) modify its body posture or its position with respect to the object to achieve better reaching. The robot learns this map incrementally during the execution of goal-directed reaching movements; reaching control employs kinematic models that are likewise updated online. Our solution is innovative with respect to previous works in three aspects: the robot workspace is described using a gaze-centered motor representation; the map is built incrementally during the execution of goal-directed actions; and learning is autonomous and online. We implement our strategy on the 48-DOF humanoid robot Kobian and show how the Reachable Space Map can support intelligent reaching behavior with the whole body (i.e. head, eyes, arm, waist, legs).
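To make the idea concrete, here is a minimal sketch of how a Reachable Space Map of this kind could be stored and queried. All names (`ReachableSpaceMap`, `bin_deg`, the pan/tilt/vergence parametrization and the running-average update) are our illustrative assumptions, not the authors' implementation: the key point from the abstract is only that locations are indexed by gaze-centered motor coordinates and that reachability scores are updated incrementally after each goal-directed reach.

```python
import math


class ReachableSpaceMap:
    """Hypothetical sketch: reachability scores indexed by discretized
    gaze-centered motor coordinates (head/eye pan, tilt, vergence)."""

    def __init__(self, bin_deg=5.0):
        self.bin_deg = bin_deg   # size of each gaze-angle cell, in degrees
        self.scores = {}         # cell -> running-average reachability in [0, 1]
        self.counts = {}         # cell -> number of reaching attempts recorded

    def _cell(self, pan, tilt, vergence):
        # Discretize the gaze configuration used to fixate the target.
        b = self.bin_deg
        return tuple(int(math.floor(a / b)) for a in (pan, tilt, vergence))

    def update(self, pan, tilt, vergence, score):
        # Incremental (online) update after one goal-directed reach:
        # fold the new performance score into the cell's running average.
        c = self._cell(pan, tilt, vergence)
        n = self.counts.get(c, 0)
        old = self.scores.get(c, 0.0)
        self.scores[c] = (old * n + score) / (n + 1)
        self.counts[c] = n + 1

    def reachability(self, pan, tilt, vergence):
        # Estimate the Reachability of a currently fixated object;
        # returns None for gaze configurations never explored.
        return self.scores.get(self._cell(pan, tilt, vergence))
```

In use, the robot would fixate a detected object, query `reachability` for the resulting gaze configuration, and, if the estimate is poor, adjust its body posture (waist, legs) or reposition itself before reaching, mirroring behavior (ii) above.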
Journal: Robotics and Autonomous Systems - Volume 62, Issue 4, April 2014, Pages 556–567