Article ID Journal Published Year Pages File Type
6866409 Neurocomputing 2014 16 Pages PDF
Abstract
Visual simultaneous localization and mapping (VSLAM) is becoming increasingly popular in research and industry as a solution for mapping an unknown environment with moving cameras. However, classic methods such as Extended Kalman Filter (EKF)-based VSLAM have two significant limitations: first, their robustness and accuracy drop dramatically when low-frame-rate cameras are used or the camera velocity changes suddenly; second, their dynamic models are either expensive to build or too simple to capture complex motion. In this paper, a novel VSLAM approach called conditional simultaneous localization and mapping (C-SLAM) is proposed, in which the camera state transition is derived from image data using optical flow constraints and epipolar geometry in the prediction stage. This improvement not only increases prediction accuracy but also replaces the commonly used predefined dynamic models, which require additional computation. Compared to classic VSLAM approaches, C-SLAM predicts more accurately and is computationally more efficient, especially under abrupt changes in camera velocity or low camera frame rates. These advantages are supported by the experimental results and analysis presented in this paper.
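The core idea in the abstract — replacing a predefined dynamic model with a motion estimate recovered directly from image correspondences in the prediction stage — can be illustrated with a minimal sketch. This is not the paper's actual method: C-SLAM uses optical flow constraints and epipolar geometry on real images, whereas the toy below recovers a rigid 2D motion from matched points via a least-squares (Kabsch) fit and uses it in place of a constant-velocity prediction. All names and the synthetic data are hypothetical.

```python
import numpy as np

def predict_const_velocity(pose, vel, dt):
    # Classic predefined dynamic model: assumes velocity stays constant,
    # which breaks down under abrupt velocity changes or low frame rates.
    return pose + vel * dt

def estimate_motion_from_points(pts_prev, pts_curr):
    # Stand-in for the image-driven prediction step: recover a rigid 2D
    # motion (rotation R, translation t) from matched feature points by
    # least squares (Kabsch). C-SLAM itself derives the state transition
    # from optical flow constraints and epipolar geometry instead.
    mu_p, mu_c = pts_prev.mean(axis=0), pts_curr.mean(axis=0)
    H = (pts_prev - mu_p).T @ (pts_curr - mu_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_c - R @ mu_p
    return R, t

# Synthetic correspondences: points rotated by 0.1 rad and shifted.
rng = np.random.default_rng(0)
pts_prev = rng.standard_normal((20, 2))
theta, shift = 0.1, np.array([0.5, -0.2])
Rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
pts_curr = pts_prev @ Rot.T + shift

R_est, t_est = estimate_motion_from_points(pts_prev, pts_curr)
print(np.allclose(R_est, Rot), np.allclose(t_est, shift))  # True True
```

In an EKF, `R_est` and `t_est` would drive the prediction of the camera state, so no hand-tuned motion model (like `predict_const_velocity`) is needed between frames.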
Related Topics
Physical Sciences and Engineering Computer Science Artificial Intelligence