Article ID | Journal | Published Year | Pages
---|---|---|---
5002748 | IFAC-PapersOnLine | 2016 | 5 Pages
Abstract
In this study, we propose a novel method that enables robots to autonomously form place concepts by using hierarchical Multimodal Latent Dirichlet Allocation (hMLDA) based on position and vision information. Generally, Simultaneous Localization and Mapping (SLAM) is used to identify the self-position of a robot on a metric map. In contrast, human beings frequently use place concepts that are defined by the name of a place and its spatial extent, such as "kitchen," "meeting space," and "in front of the TV." To support human life, robots would need to learn such place concepts and use them to collaborate with human beings. The proposed method enables robots to autonomously form hierarchical place concepts by using hMLDA, which stochastically integrates position information obtained by Monte Carlo Localization (MCL) and vision information obtained by a Convolutional Neural Network (CNN). Evaluation experiments using a robot in a real environment demonstrated the applicability of the hierarchical place concepts formed by the proposed method.
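The core idea of the abstract, stochastically coupling position observations (from MCL) and vision observations (from a CNN) under shared latent place concepts, can be illustrated with a much-simplified sketch. The code below is not the paper's hMLDA: it is a flat (non-hierarchical) multimodal LDA trained by collapsed Gibbs sampling, with synthetic data standing in for MCL grid cells and quantized CNN features. All names, counts, and hyperparameters are illustrative assumptions.

```python
import random
from collections import defaultdict

# Toy stand-in for hMLDA: a flat multimodal LDA trained by collapsed Gibbs
# sampling. Each "document" is one robot visit to a location, carrying
# position words (grid cells, mimicking discretized MCL estimates) and
# vision words (mimicking quantized CNN features). All data is synthetic;
# the paper's hMLDA additionally learns a hierarchy of concepts.

random.seed(0)

K = 2                 # number of place concepts (topics)
ALPHA, BETA = 1.0, 0.1  # Dirichlet hyperparameters (illustrative values)

# Synthetic observations: (position_words, vision_words) per visit.
# Visits 0-2 mimic a "kitchen" area, visits 3-5 a "meeting space".
docs = [
    (["cell_1_1", "cell_1_2"], ["sink", "fridge"]),
    (["cell_1_1"],             ["fridge", "stove"]),
    (["cell_1_2"],             ["sink"]),
    (["cell_5_5", "cell_5_6"], ["table", "chairs"]),
    (["cell_5_5"],             ["whiteboard", "chairs"]),
    (["cell_5_6"],             ["table"]),
]

# Flatten each visit into (modality, word) tokens. Both modalities share one
# per-document topic distribution; that sharing is what couples position and
# vision information in the model.
tokens = [[(m, w) for m, ws in zip("pv", doc) for w in ws] for doc in docs]

# Initialize Gibbs sampling state with random topic assignments.
z = [[random.randrange(K) for _ in toks] for toks in tokens]
ndk = [[0] * K for _ in docs]               # document-topic counts
nkw = [defaultdict(int) for _ in range(K)]  # topic-word counts
nk = [0] * K                                # topic totals
for d, toks in enumerate(tokens):
    for i, tok in enumerate(toks):
        k = z[d][i]
        ndk[d][k] += 1; nkw[k][tok] += 1; nk[k] += 1

V = len({tok for toks in tokens for tok in toks})  # vocabulary size

for _ in range(200):  # Gibbs sweeps
    for d, toks in enumerate(tokens):
        for i, tok in enumerate(toks):
            # Remove the token's current assignment from the counts.
            k = z[d][i]
            ndk[d][k] -= 1; nkw[k][tok] -= 1; nk[k] -= 1
            # Resample its topic from the collapsed conditional.
            weights = [(ndk[d][j] + ALPHA) *
                       (nkw[j][tok] + BETA) / (nk[j] + V * BETA)
                       for j in range(K)]
            r = random.uniform(0, sum(weights))
            for j in range(K):
                r -= weights[j]
                if r <= 0:
                    break
            z[d][i] = j
            ndk[d][j] += 1; nkw[j][tok] += 1; nk[j] += 1

# Each visit's dominant topic is its inferred place concept.
concepts = [max(range(K), key=lambda j: ndk[d][j]) for d in range(len(docs))]
print(concepts)
```

On this toy data the sampler groups the kitchen-like visits under one concept and the meeting-space-like visits under the other, showing how shared latent topics let positional and visual evidence reinforce each other.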
Related Topics
Physical Sciences and Engineering
Engineering
Computational Mechanics
Authors
Yoshinobu Hagiwara, Masakazu Inoue, Tadahiro Taniguchi