| Article ID | Journal | Published Year | Pages |
|---|---|---|---|
| 411900 | Robotics and Autonomous Systems | 2016 | 17 |
- Exploration formulated and solved as partially observable Markov decision process.
- New sampling-based approximation for mutual information in mobile robotics.
- Efficient algorithm for drawing samples for forward-simulation based planning.
- Experimental validation in simulated and real-world exploration domains.
- Software available at https://goo.gl/ENGkIf.
We address the problem of controlling a mobile robot to explore a partially known environment. The robot’s objective is to maximize the amount of information collected about the environment. We formulate the problem as a partially observable Markov decision process (POMDP) with an information-theoretic objective function, and solve it by applying forward-simulation algorithms with an open-loop approximation. We present a new sample-based approximation of mutual information that is useful in mobile robotics and can be seamlessly integrated with forward-simulation planning algorithms. We investigate the usefulness of POMDP-based planning for exploration and, to alleviate some of its weaknesses, propose combining it with frontier-based exploration. Experimental results in simulated and real environments show that, depending on the environment, POMDP-based planning can improve exploration performance over frontier exploration.
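The abstract does not spell out the estimator, but a standard sample-based approximation of the mutual information I(M; Z | a) = H(Z | a) − H(Z | M, a) between the map M and future measurements Z under an open-loop action sequence a can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: it assumes the belief over maps is represented by a set of samples (particles), and `simulate_measurement` and `log_likelihood` are hypothetical sensor-model callables supplied by the caller.

```python
import numpy as np

def estimate_mutual_information(map_samples, actions, simulate_measurement,
                                log_likelihood, n_meas=10, rng=None):
    """Monte Carlo estimate of I(M; Z | a) = H(Z | a) - H(Z | M, a).

    map_samples:          map hypotheses drawn from the current belief
    actions:              open-loop action sequence to evaluate
    simulate_measurement: draws z ~ p(z | m, a)  (forward sensor model)
    log_likelihood:       evaluates log p(z | m, a)
    """
    rng = rng or np.random.default_rng()
    n_maps = len(map_samples)
    cond_terms = []  # samples of log p(z | m_i, a), for H(Z | M, a)
    marg_terms = []  # samples of log p(z | a), for H(Z | a)
    for m_i in map_samples:
        for _ in range(n_meas):
            z = simulate_measurement(m_i, actions, rng)
            # Conditional entropy term: likelihood under the generating map.
            cond_terms.append(log_likelihood(z, m_i, actions))
            # Marginal entropy term: average likelihood over all map
            # samples, log (1/N) sum_k p(z | m_k, a), via log-sum-exp.
            log_liks = np.array([log_likelihood(z, m_k, actions)
                                 for m_k in map_samples])
            marg_terms.append(np.logaddexp.reduce(log_liks) - np.log(n_maps))
    h_z_given_m = -np.mean(cond_terms)  # H(Z | M, a)
    h_z = -np.mean(marg_terms)          # H(Z | a)
    return h_z - h_z_given_m
```

An open-loop forward-simulation planner of the kind the abstract describes would then score each candidate action sequence with such an estimate and execute the maximizer, e.g. `max(candidates, key=lambda a: estimate_mutual_information(map_samples, a, sim, loglik))`.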