Article ID: 411654
Journal: Robotics and Autonomous Systems
Published Year: 2010
Pages: 11 Pages
File Type: PDF
Abstract

Two relevant issues in vision-based navigation are the field-of-view constraints of conventional cameras and the model and structure dependency of standard approaches. A good solution to these problems is to use the homography model with omnidirectional vision. However, a scene plane covers only a small part of the omnidirectional image, discarding relevant information across the wide field of view, which is the main advantage of omnidirectional sensors. This paper presents a new approach for computing multiple homographies from virtual planes using omnidirectional images, and applies it in an omnidirectional vision-based homing control scheme. The multiple homographies are robustly computed from a set of point matches across two omnidirectional views, using a method that relies on virtual planes and is independent of the scene structure. The method exploits the planar motion constraint of the platform and computes virtual vertical planes in the scene. The family of homographies is also constrained to be embedded in a three-dimensional linear subspace to improve numerical consistency. Simulations and real experiments are provided to evaluate our approach.
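The abstract's core ideas can be illustrated with a minimal sketch: estimating each homography from point matches by the standard DLT method, and then projecting the whole family onto its dominant three-dimensional linear subspace via SVD to enforce mutual consistency. This is not the paper's actual algorithm for virtual vertical planes; the function names (estimate_homography, project_to_subspace) and the plain least-squares subspace projection are illustrative assumptions.

import numpy as np

def estimate_homography(pts1, pts2):
    """DLT estimate of a 3x3 homography from >=4 point matches (Nx2 arrays)."""
    A = []
    for (x1, y1), (x2, y2) in zip(pts1, pts2):
        # Two rows of the DLT system per correspondence x2 ~ H x1.
        A.append([-x1, -y1, -1, 0, 0, 0, x2 * x1, x2 * y1, x2])
        A.append([0, 0, 0, -x1, -y1, -1, y2 * x1, y2 * y1, y2])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)           # null-space vector = homography entries
    return H / np.linalg.norm(H)       # fix the scale ambiguity

def project_to_subspace(H_list, dim=3):
    """Project a family of homographies onto its best rank-`dim` linear
    subspace (in the least-squares sense) to improve numerical consistency."""
    M = np.stack([H.ravel() for H in H_list])        # one homography per row
    _, _, Vt = np.linalg.svd(M, full_matrices=False)
    basis = Vt[:dim]                                 # dominant 3D subspace
    M_proj = (M @ basis.T) @ basis                   # re-express each H in it
    return [h.reshape(3, 3) for h in M_proj]

In this sketch, each estimated homography is flattened to a 9-vector, the stack of vectors is factored by SVD, and every homography is replaced by its projection onto the three dominant right singular vectors; the hedged assumption is that such a rank-3 projection is a reasonable stand-in for the subspace constraint described in the abstract.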

Related Topics
Physical Sciences and Engineering; Computer Science; Artificial Intelligence