Article ID: 4948908
Journal: Robotics and Autonomous Systems
Published Year: 2016
Pages: 55
File Type: PDF
Abstract
In this paper, we address the problem of visual simultaneous localization and mapping (VSLAM) using a single camera as the sole sensor. A VSLAM system estimates its position and orientation (pose) by tracking distinct landmarks in the environment with its camera. Most approaches detect feature points in the environment using an interest point operator that looks for small textured image templates. Existing algorithms typically assume that an image template is the projection of a single planar surface patch. However, if the template is actually the projection of a nonplanar surface, tracking will eventually fail. We present an algorithm that estimates the 3D structure of a nonplanar template as it is tracked through a sequence of images. Using the 3D structure, the algorithm can more accurately predict the appearance of the template and, as a result, is better able to track it. We then evaluate the benefit of the new feature tracking method in VSLAM, and demonstrate that the new algorithm tracks points longer, and achieves better accuracy, than the standard single-plane feature tracking method. The approach is especially effective in scenes where surface discontinuities are common.
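To illustrate why the single-plane assumption breaks down, the minimal sketch below (not the authors' implementation; all quantities are synthetic and chosen only for illustration) predicts where the pixels of a small patch land after a camera translation, once by flattening the patch to a single plane at its mean depth and once using the true per-pixel 3D structure. The residual between the two predictions is the appearance-prediction error that accumulates when the patch actually straddles a depth discontinuity.

```python
# Minimal sketch: planar vs. per-pixel-depth appearance prediction for a patch.
# Synthetic example only; camera intrinsics, patch size, and depths are assumptions.
import numpy as np

def project(K, R, t, X):
    """Project 3D points X (N,3) into a camera with pose (R, t) using pinhole model."""
    Xc = X @ R.T + t              # world -> camera coordinates
    uv = Xc @ K.T                 # apply intrinsics
    return uv[:, :2] / uv[:, 2:3] # perspective division

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])

# Synthetic 5x5 patch of 3D points with a depth discontinuity (nonplanar surface).
gx, gy = np.meshgrid(np.linspace(-0.05, 0.05, 5), np.linspace(-0.05, 0.05, 5))
depth = np.where(gx < 0, 2.0, 2.3)   # two surfaces at different depths
X = np.stack([gx.ravel(), gy.ravel(), depth.ravel()], axis=1)

# Second camera pose: small sideways translation, no rotation.
R2 = np.eye(3)
t2 = np.array([0.05, 0.0, 0.0])

# (a) Single-plane prediction: pretend the whole patch lies at its mean depth.
X_planar = X.copy()
X_planar[:, 2] = X[:, 2].mean()
pred_planar = project(K, R2, t2, X_planar)

# (b) Prediction using the true per-pixel 3D structure.
pred_true = project(K, R2, t2, X)

err = np.linalg.norm(pred_planar - pred_true, axis=1)
print(f"max appearance-prediction error under the planar assumption: {err.max():.2f} px")
```

Even with this tiny baseline, the planar model mislocates pixels near the discontinuity by a fraction of a pixel per frame; over many frames such drift is what causes the template tracker to lose the feature, which is the failure mode the paper's 3D-structure estimate is designed to avoid.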
Related Topics
Physical Sciences and Engineering → Computer Science → Artificial Intelligence
Authors