| Article ID | Journal | Published Year | Pages | File Type |
|---|---|---|---|---|
| 493215 | Procedia Technology | 2013 | 8 Pages | |
Today, the manipulation of objects by mobile robots is still a challenging task. This task is commonly decomposed into three stages: a) approaching the objects, b) path planning and trajectory execution of the manipulator arm, and finally c) fine tuning and grasping. This work presents an implementation of 3D pose visual servoing for an autonomous mobile manipulator that addresses the last stage of the manipulation task (fine tuning and grasping). The proposed methodology consists of three steps: a) a fast monocular image segmentation, followed by b) 3D model reconstruction and finally c) pose estimation that feeds back into the fine-tuning manipulation control loop. The objects and the end-effector are marked with different colors, and their models are assumed to be known. Our mobile manipulator prototype consists of a stereo camera in a stand-alone binocular configuration and an anthropomorphic 7-DoF arm with a parallel end-effector (gripper). Our methodology runs in real time and is suitable for continuous visual servoing. Experimental results are reported.
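As a rough illustration of the three-step pipeline described above (color segmentation, stereo 3D reconstruction, pose-based fine-tuning control), the following Python/OpenCV sketch is one possible realization. It is not the paper's implementation: the HSV thresholds, the blob-centroid triangulation via the rectified rig's reprojection matrix `Q`, and the position-only proportional servo law are all assumptions standing in for the paper's full 3D model reconstruction and 3D pose servoing.

```python
import numpy as np
import cv2

# Hypothetical HSV thresholds; the paper only states that the object and the
# end-effector are marked with distinct colors, so these ranges are assumptions.
OBJECT_HSV = (np.array([100, 120, 70]), np.array([130, 255, 255]))
GRIPPER_HSV = (np.array([40, 120, 70]), np.array([80, 255, 255]))


def color_mask(bgr, lo_hi):
    """Step a) fast monocular segmentation: threshold in HSV and clean up noise."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lo_hi[0], lo_hi[1])
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))


def centroid_3d(mask_left, mask_right, Q):
    """Step b) sketch of stereo 3D reconstruction: triangulate the blob centroid
    from the left/right masks using the reprojection matrix Q of a rectified rig
    (a simplification of reconstructing the full known 3D model)."""
    def centroid(mask):
        m = cv2.moments(mask, binaryImage=True)
        if m["m00"] == 0:
            return None
        return m["m10"] / m["m00"], m["m01"] / m["m00"]

    cl, cr = centroid(mask_left), centroid(mask_right)
    if cl is None or cr is None:
        return None
    disparity = cl[0] - cr[0]
    # Reproject pixel (x, y, disparity) to a 3D point in the left-camera frame.
    point = Q @ np.array([cl[0], cl[1], disparity, 1.0])
    return point[:3] / point[3]


def servo_step(obj_xyz, gripper_xyz, gain=0.5):
    """Step c) sketch of the fine-tuning loop: a proportional Cartesian velocity
    command that drives the gripper toward the object (orientation omitted here,
    whereas the paper feeds back a full 3D pose)."""
    error = obj_xyz - gripper_xyz
    return gain * error  # desired end-effector translational velocity
```

In a continuous visual servoing loop, these three functions would be called once per stereo frame: segment both images for the object and the gripper colors, triangulate each, and send the resulting velocity command to the 7-DoF arm controller until the pose error falls below a grasping threshold.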