Article code: 411268
Journal code: 679508
Publication year: 2016 · 20 pages · full-text PDF · free download
English title (ISI article)
Visual–inertial navigation for pinpoint planetary landing using scale-based landmark matching
Related topics
Engineering and Basic Sciences, Computer Engineering, Artificial Intelligence
English abstract


• Future space missions require autonomous terrain navigation for pinpoint landing.
• We test a visual–inertial navigation filter that identifies generic mapped landmarks.
• The filter can efficiently predict position and scale of a landmark within the image.
• We demonstrate pinpoint landing performance on a lunar-representative test bench.
• We test its robustness to various parameter changes.

Landing an autonomous spacecraft within 100 m of a mapped target is a navigation challenge in planetary exploration. Vision-based approaches attempt to pair 2D features detected in camera images with 3D mapped landmarks to reach the required precision. This paper presents a vision-aided inertial navigation system for pinpoint planetary landing called LION (Landing Inertial and Optical Navigation), which can operate over any type of terrain, regardless of topography. LION uses measurements from a novel image-to-map matcher to update, through a tight data-fusion scheme, the state of an extended Kalman filter propagated with inertial data. The image processing uses the state and covariance predictions from the filter to determine the regions and extraction scales in which to search for unambiguous landmarks in the image. The image-scale management process operates per landmark and greatly improves the repeatability rate between the map and descent images. A lunar-representative optical test bench called Visilab was also designed to test LION. The observability of absolute navigation performance in Visilab is evaluated with a model developed specifically for this purpose. Finally, the system's performance is evaluated at a number of altitudes, along with its robustness to off-nadir camera angles, illumination changes, a different map-generation process, and non-planar topography. The error converges to a mean of 4 m and a 3-RMS dispersion of 47 m at 3 km altitude on the scaled test setup.
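The key idea in the abstract, using the filter's predicted state and covariance to bound where and at what scale each mapped landmark is searched for in the image, can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the function name, the pinhole model, and all numeric values are assumptions chosen for clarity.

```python
import numpy as np

def predict_landmark_search(p_world, cam_pos, R_wc, K, P_pos,
                            landmark_radius, n_sigma=3.0):
    """Predict pixel position, extraction scale, and a search radius
    for one mapped 3D landmark (illustrative sketch only).

    p_world         : 3D landmark position in the world frame (m)
    cam_pos         : predicted camera position from inertial propagation (m)
    R_wc            : rotation matrix, world frame -> camera frame
    K               : 3x3 pinhole camera intrinsics
    P_pos           : 3x3 covariance of the predicted camera position
    landmark_radius : metric size of the landmark on the map (m)
    """
    # Transform the landmark into the camera frame and project it.
    p_cam = R_wc @ (p_world - cam_pos)
    depth = p_cam[2]
    uv_h = K @ (p_cam / depth)
    u, v = uv_h[0], uv_h[1]

    # The expected image scale of the landmark shrinks with depth;
    # this is what allows a per-landmark choice of extraction scale.
    f = K[0, 0]
    scale_px = f * landmark_radius / depth

    # Propagate position uncertainty through the projection Jacobian
    # to get an n-sigma search ellipse, reduced here to a radius.
    J = (f / depth) * np.array([[1.0, 0.0, -p_cam[0] / depth],
                                [0.0, 1.0, -p_cam[1] / depth]])
    S = J @ (R_wc @ P_pos @ R_wc.T) @ J.T
    search_radius = n_sigma * np.sqrt(np.max(np.linalg.eigvalsh(S)))
    return (u, v), scale_px, search_radius
```

For example, a nadir-pointing camera at 3 km altitude with a 1000 px focal length and a 10 m position sigma per axis would match a 50 m crater at an expected scale of about 17 px within a search window of roughly 10 px, which is why restricting extraction to the predicted region and scale improves match repeatability.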

Publisher
Database: Elsevier - ScienceDirect
Journal: Robotics and Autonomous Systems - Volume 78, April 2016, Pages 63–82