| Article ID | Journal | Published Year | Pages | File Type |
|---|---|---|---|---|
| 412360 | Robotics and Autonomous Systems | 2013 | 17 | |
• We apply sparse, appearance-based computer vision techniques to laser intensity images.
• We show that descriptive features are stable in laser intensity images over a 24 h period outdoors.
• We demonstrate that laser-based VO is comparable to stereo-based VO.
• Promising visual teach and repeat results are shown (teaching during the day and matching at night).
In an effort to facilitate lighting-invariant exploration, this paper presents an appearance-based approach using 3D scanning laser-rangefinders for two core visual navigation techniques: visual odometry (VO) and visual teach and repeat (VT&R). The key to our method is to convert raw laser intensity data into greyscale camera-like images, in order to apply sparse, appearance-based techniques traditionally used with camera imagery. The novel concept of an image stack is introduced, which is an array of azimuth, elevation, range, and intensity images that are used to generate keypoint measurements and measurement uncertainties. Using this technique, we present the following four experiments. In the first experiment, we explore the stability of a representative keypoint detection/description algorithm on camera and laser intensity images collected over a 24 h period outdoors. In the second and third experiments, we validate our VO algorithm using real data collected outdoors with two different 3D scanning laser-rangefinders. Lastly, our fourth experiment presents promising preliminary VT&R localization results, where the teaching phase was done during the day and the repeating phase was done at night. These experiments show that it is possible to overcome the lighting sensitivity encountered with cameras while continuing to exploit the heritage of the appearance-based visual odometry pipeline.
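To make the image-stack idea concrete, the following Python sketch (not the paper's implementation; the function names, the choice of ORB as the sparse detector, and the NumPy/OpenCV layout are assumptions) illustrates how aligned azimuth, elevation, range, and intensity images could be stacked, and how keypoints detected on the intensity channel yield azimuth/elevation/range measurements. The paper additionally derives measurement uncertainties from the stack, which is omitted here.

```python
import numpy as np
import cv2

def build_image_stack(azimuth, elevation, rng, intensity):
    """Stack per-pixel laser channels into a single (H, W, 4) array.

    All four inputs are assumed to be co-registered 2D arrays produced by
    binning the scanning laser-rangefinder returns into an image grid.
    """
    return np.stack([azimuth, elevation, rng, intensity], axis=-1)

def keypoint_measurements(stack):
    """Detect sparse keypoints on the intensity channel and read off the
    corresponding azimuth/elevation/range values as measurements."""
    intensity = stack[..., 3]

    # Rescale intensity to 8-bit so a standard camera-style detector applies.
    img8 = cv2.normalize(intensity, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

    # ORB is used here only as a stand-in for a sparse detector/descriptor;
    # the paper's specific choice of algorithm may differ.
    detector = cv2.ORB_create()
    keypoints, descriptors = detector.detectAndCompute(img8, None)

    measurements = []
    for kp in keypoints:
        u, v = int(round(kp.pt[0])), int(round(kp.pt[1]))
        az, el, r, _ = stack[v, u]
        measurements.append((az, el, r))
    return measurements, descriptors
```

Because the azimuth, elevation, and range images are pixel-aligned with the intensity image, each 2D keypoint immediately provides a 3D measurement, which is what allows the standard appearance-based VO pipeline to run on laser data.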