Article ID: 10327075
Journal: Robotics and Autonomous Systems
Published Year: 2005
Pages: 12
File Type: PDF
Abstract
When humans explain a task to be executed by a robot, they decompose it into chunks of actions. These chunks form a chain of search-and-act sensory-motor loops, each of which exits when a condition is met. In this paper we investigate the nature of these chunks in an urban visual navigation context and propose a method for implementing the corresponding robot primitives, such as “take the nth turn right/left”. These primitives make use of a “short-lived” internal map that is updated as the robot moves along. Intersections are recognised and localised in this map using task-guided template matching. This approach takes advantage of the content of the human instructions to save computation time and improve robustness.
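To make the abstract's description concrete, the sketch below shows one way a “take the nth turn right” primitive could be structured as a search-and-act loop over a short-lived local map, with detection guided by the side named in the instruction. It is illustrative only, not the authors' implementation: LocalMap, detect_intersection, take_nth_turn, the toy free-space criterion, and the fake sensor stream are all hypothetical stand-ins.

```python
"""Illustrative sketch (not the paper's code) of a "take the nth turn right/left"
primitive as a search-and-act sensory-motor loop over a short-lived local map."""

from dataclasses import dataclass, field


@dataclass
class LocalMap:
    """Short-lived map: keeps only recent observations, which are forgotten
    as the robot moves along."""
    window: int = 50
    cells: list = field(default_factory=list)

    def update(self, observation):
        self.cells.append(observation)
        self.cells = self.cells[-self.window:]  # drop stale observations


def detect_intersection(local_map, side):
    """Task-guided matching stub: only the template implied by the instruction
    (a branch opening on the requested side) is searched for, which is where
    the computational saving and robustness come from. Here the 'template' is
    reduced to a toy free-space threshold on the latest observation."""
    if not local_map.cells:
        return False
    latest = local_map.cells[-1]
    return latest.get(side, 0.0) > 3.0  # metres of free space, toy threshold


def take_nth_turn(sensor_stream, n, side="right"):
    """Search-and-act chunk: move along, refresh the map, count intersections
    on the requested side, and exit once the n-th one is reached."""
    local_map = LocalMap()
    seen = 0
    for observation in sensor_stream:
        local_map.update(observation)              # act: advance, update map
        if detect_intersection(local_map, side):   # search: guided matching
            seen += 1
            if seen == n:                          # exit condition of the chunk
                return f"turning {side} at intersection {seen}"
    return "ran out of road before the requested intersection"


if __name__ == "__main__":
    # Fake sensor stream in which free space on the right opens up twice.
    stream = [{"right": 1.0}, {"right": 4.2}, {"right": 1.0},
              {"right": 0.8}, {"right": 5.0}]
    print(take_nth_turn(iter(stream), n=2, side="right"))
```

In this toy version the exit condition is simply reaching the nth detected opening; the paper's primitives would instead rely on template matching against the internal map to recognise and localise real intersections.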
Related Topics
Physical Sciences and Engineering Computer Science Artificial Intelligence
Authors