Article ID: 712573
Journal: IFAC Proceedings Volumes
Published Year: 2006
Pages: 6 Pages
File Type: PDF
Abstract

The most natural way for a robot to learn about a new object is a short presentation of the object, either held in the hand or placed on a table. The robot should learn a model of the object and later use it to find the object again and track it. All of these steps should be executed autonomously. One of our long-term goals is the use of our system for robot grasping tasks. Following this approach, we developed a method that extracts the object model from a depth image acquired by scanning the object. The extracted model is subsequently used to detect the object in the environment. After detection, the approach automatically initializes a tracking method that follows the object's motion to enable grasping or navigation tasks. Autonomous execution is made possible by integrating depth and appearance data from a laser-based depth camera and a color camera. The use of both depth and color images makes the approach robust to illumination changes and to differing conditions between learning the object model and later re-detecting it. Experiments show the feasibility of the concept even when the object is partially occluded and the scene is cluttered.
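The abstract's learning step, segmenting an object presented on a table from a depth image, can be sketched roughly as follows. This is a minimal illustration, not the authors' method: it assumes a known table-plane depth and uses simple thresholding with NumPy on a synthetic depth image; the function names (`segment_object`, `bounding_box`) are hypothetical.

```python
import numpy as np

def segment_object(depth, table_depth, tol=0.05):
    """Return a boolean mask of pixels closer to the camera than the
    table plane (minus a tolerance to absorb sensor noise)."""
    return depth < (table_depth - tol)

def bounding_box(mask):
    """Axis-aligned bounding box (y0, x0, y1, x1) of the True pixels."""
    ys, xs = np.nonzero(mask)
    return ys.min(), xs.min(), ys.max(), xs.max()

# Synthetic depth image: table plane at 1.0 m, a box-shaped object at 0.7 m.
depth = np.full((120, 160), 1.0)
depth[40:80, 60:100] = 0.7

mask = segment_object(depth, table_depth=1.0)
y0, x0, y1, x1 = bounding_box(mask)
print((y0, x0, y1, x1))  # → (40, 60, 79, 99)
```

The resulting mask (here a crude stand-in for the learned object model) is the kind of region a real system would then describe with depth and color features for later re-detection and tracking.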

Related Topics
Physical Sciences and Engineering Engineering Computational Mechanics