| Article code | Journal code | Publication year | English article | Full-text version |
|---|---|---|---|---|
| 412484 | 679645 | 2012 | 13-page PDF | Free download |
The current work addresses the problem of 3D model tracking in the context of monocular and stereo omnidirectional vision, in order to estimate the camera pose. To this end, we track 3D objects modeled by line segments, since the straight-line feature is often used to model the environment. Indeed, we are interested in mobile robot navigation using omnidirectional vision in structured environments. In omnidirectional vision, 3D straight lines are projected as conics in the image, and under certain conditions these conics exhibit singularities. In this paper, we present two contributions. First, we propose a new spherical formulation of pose estimation that removes these singularities, using an object model composed of lines. Both the theoretical analysis and the validation on synthetic images show that the new formulation clearly outperforms the former image-plane one. The second contribution is the extension of the spherical representation to the stereovision case, using a sensor that combines a camera and four mirrors. Results in various situations show robustness to illumination changes and local mistracking. As a final result, the proposed stereo spherical formulation allows us to localize a robot online, both indoors and outdoors, where the classical formulation fails.
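The key idea behind the spherical formulation can be sketched as follows. Under a central (unified sphere) projection model, a 3D line and the camera center span an interpretation plane, and the line projects onto the unit sphere as the great circle where that plane cuts the sphere. Working with the plane normal directly sidesteps the degenerate conics that arise in the image plane. The sketch below illustrates this geometry only; the function names and the algebraic error measure are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def line_to_great_circle_normal(p1, p2):
    """Normal of the interpretation plane spanned by the camera center
    (origin) and a 3D line through points p1 and p2 (camera frame).
    On the unit sphere, the line projects to the great circle n . x = 0."""
    n = np.cross(p1, p2)
    return n / np.linalg.norm(n)

def point_to_sphere(p):
    """Central projection of a 3D point onto the unit sphere."""
    return p / np.linalg.norm(p)

def great_circle_error(x_s, n):
    """Algebraic distance of a spherical point x_s to the great circle
    with unit normal n. Bounded in [-1, 1] and defined for every line,
    unlike conic-based errors in the image plane, which can degenerate."""
    return float(np.dot(n, x_s))

# A point lying on the 3D line projects onto its great circle: error ~ 0.
p1 = np.array([1.0, 0.0, 1.0])
p2 = np.array([0.0, 1.0, 1.0])
n = line_to_great_circle_normal(p1, p2)
midpoint_on_sphere = point_to_sphere(0.5 * (p1 + p2))
print(abs(great_circle_error(midpoint_on_sphere, n)))  # ~ 0.0
```

In a tracking loop, errors of this form for each model line would be minimized over the camera pose; the spherical error stays well conditioned for all line orientations, which is consistent with the improved convergence rate reported in the highlights below.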
► We tackle the model-based tracking issue in omnidirectional vision and stereovision.
► We propose a new spherical formulation of the problem.
► The new formulation is compared to a classical image-plane formulation.
► The spherical formulation increases the convergence rate and pose precision.
► The extension to an unusual omnidirectional stereovision sensor increases robustness.
Journal: Robotics and Autonomous Systems - Volume 60, Issue 8, August 2012, Pages 1056–1068