| Article ID | Journal | Published Year | Pages | File Type |
|---|---|---|---|---|
| 6937770 | Image and Vision Computing | 2017 | 30 | |
Abstract
Visual odometry using only a monocular camera faces more algorithmic challenges than stereo odometry. We present a robust monocular visual odometry framework for automotive applications. An extended propagation-based tracking framework is proposed which yields highly accurate (unscaled) pose estimates. Scale is supplied by ground plane pose estimation, employing street pixel labeling using a convolutional neural network (CNN). The proposed framework has been extensively tested on the KITTI dataset and achieves a higher rank than currently published state-of-the-art monocular methods in the KITTI odometry benchmark. Unlike other VO/SLAM methods, this result is achieved without a loop-closing mechanism, without RANSAC, and without multi-frame bundle adjustment. Thus, we challenge the common belief that robust systems can only be built using iterative robustification tools like RANSAC.
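The scale-recovery idea summarized above (metric scale from the ground plane) can be sketched in a few lines. This is an illustrative reconstruction, not the authors' implementation: the function names and the camera height are assumptions. If the camera is mounted at a known height above the street, and the ground plane fitted to street-labeled 3D points in the unscaled reconstruction lies at distance d (in unscaled units) from the camera center, then the metric scale factor is height / d.

```python
import numpy as np

def fit_ground_plane(points):
    """Least-squares plane fit through 3D points labeled as street.

    Returns (n, d) with unit normal n and offset d such that
    n.dot(x) + d = 0 for points x on the plane; |d| is then the
    distance of the camera center (the origin) to the plane.
    """
    centroid = points.mean(axis=0)
    # Normal = direction of smallest variance of the centered points,
    # i.e. the right-singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(points - centroid)
    n = vt[-1]
    d = -n.dot(centroid)
    return n, d

def metric_scale(street_points_unscaled, cam_height_m):
    """Scale factor mapping unscaled VO translation to meters.

    cam_height_m is the known mounting height of the camera above
    the road (an external calibration input, assumed here).
    """
    _, d = fit_ground_plane(street_points_unscaled)
    return cam_height_m / abs(d)

# Synthetic check: a flat "street" 2.0 unscaled units below the camera
# (y-down camera convention), with a camera height of 1.65 m.
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(-5, 5, 200),
                       np.full(200, 2.0),
                       rng.uniform(4, 30, 200)])
s = metric_scale(pts, cam_height_m=1.65)
print(round(s, 3))  # 1.65 / 2.0 = 0.825
```

In practice the street points would be selected by the CNN pixel labeling mentioned in the abstract, and a robust fit (or temporal filtering of the plane estimate) would replace the plain least-squares fit used in this sketch.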
Related Topics
Physical Sciences and Engineering > Computer Science > Computer Vision and Pattern Recognition
Authors
Nolang Fanani, Alina Stürck, Matthias Ochs, Henry Bradler, Rudolf Mester