Article code | Journal code | Publication year | English article | Full-text version |
---|---|---|---|---|
382449 | 660763 | 2016 | 11-page PDF | Free download |
• Active and interactive method to deduce the poses of a fleet of robots.
• Pose estimation between robots based on 3D image alignment.
• Cooperative pose estimation when direct alignment is not possible.
• Human interaction based on 2D alignment to improve the automatic 3D image alignment.
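The highlights above centre on estimating the relative pose between two robots by aligning their 3D images. As a minimal sketch of the underlying geometry, the following estimates the rigid transform (rotation and translation) between two point clouds with known correspondences using the SVD-based Kabsch method; the paper's actual alignment pipeline (which handles unknown correspondences and uses human interaction) is more elaborate, and the function name is illustrative.

```python
import numpy as np

def rigid_align(src, dst):
    """Estimate R, t minimising ||R @ src_i + t - dst_i|| over corresponding
    points (src and dst are (N, 3) arrays, row i of src matches row i of dst)."""
    c_src = src.mean(axis=0)           # centroids of each cloud
    c_dst = dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)  # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t
```

With clean correspondences this recovers the exact relative pose; in practice, iterative schemes such as ICP wrap a step like this inside a correspondence search, which is where large viewpoint differences cause the failures the paper addresses interactively.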
Given a fleet of autonomous robots performing a cooperative task, such as the rescue of people, it is crucial for the robots to share their relative positions. If the site has not been explored beforehand, localising each robot through landmarks is not possible. Moreover, GPS information is not always available, and even when it is, it may lack the desired accuracy. Our framework is composed of a fleet of robots equipped with 2D and 3D cameras, a human coordinator, and a Human–Machine Interface. 3D images are aligned automatically to deduce the relative position between robots. 2D images are used to reduce the alignment error in an interactive manner: a human visualises both 2D images together with the current automatic alignment and imposes a new alignment through the Human–Machine Interface. Since the information is shared across the whole fleet, robots can deduce the positions of other robots that do not observe the same scene. Practical evaluation shows that, in situations where there is a large difference between images, the cooperative and interactive processes are crucial to achieving an acceptable result.
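The cooperative step described in the abstract, where a robot deduces the pose of another robot it cannot see by combining poses shared across the fleet, amounts to composing relative rigid transforms. A hedged sketch under that assumption, using 4x4 homogeneous matrices (the helper names are illustrative, not from the paper):

```python
import numpy as np

def make_T(R, t):
    """Pack a rotation R (3x3) and translation t (3,) into a 4x4 homogeneous transform."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def compose(T_ab, T_bc):
    """If T_ab maps frame B into frame A, and T_bc maps C into B,
    then T_ab @ T_bc maps C into A: robot A deduces C's pose via B."""
    return T_ab @ T_bc

# Robot A has aligned its 3D image with B; B has aligned with C.
T_ab = make_T(np.eye(3), np.array([1.0, 0.0, 0.0]))  # B is 1 m ahead of A
T_bc = make_T(np.eye(3), np.array([0.0, 2.0, 0.0]))  # C is 2 m to B's side
T_ac = compose(T_ab, T_bc)  # A's estimate of C, without seeing C's scene
```

Any interactive correction imposed by the human on one pairwise alignment propagates through every chain that uses it, which is why sharing alignments fleet-wide improves robots that never observed the same scene.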
Journal: Expert Systems with Applications - Volume 45, 1 March 2016, Pages 150–160