Article ID: 413356
Journal: Robotics and Autonomous Systems
Published Year: 2015
Pages: 16 Pages
File Type: PDF
Abstract

• We propose a fully unsupervised algorithm for perception.
• The algorithm processes high-dimensional, multimodal input.
• The output is a symbolic representation along with continuous traits.
• We apply it on a robotic task involving vision, proprioception and speech.

In this paper, we tackle the problem of multimodal learning for autonomous robots. Autonomous robots interacting with humans in an evolving environment need the ability to acquire knowledge from their multiple perceptual channels in an unsupervised way. Most approaches in the literature rely on engineered methods to process each perceptual modality. In contrast, robots should be able to acquire their own features from raw sensors, leveraging the information elicited by interaction with their environment: learning from their sensorimotor experience is a more efficient strategy from a life-long learning perspective. To this end, we propose an architecture based on deep networks, which the humanoid robot iCub uses to learn a task from multiple perceptual modalities (proprioception, vision, audition). By structuring high-dimensional, multimodal information into a set of distinct sub-manifolds in a fully unsupervised way, the architecture achieves substantial dimensionality reduction while providing both a symbolic representation of the data and a fine discrimination between similar stimuli. Moreover, the proposed network is able to exploit multimodal correlations to improve the representation of each individual modality.
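The idea of mapping fused multimodal input to a discrete symbol plus continuous traits can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' deep-network implementation: it stands in for the learned sub-manifold structuring with an SVD projection, and for the symbolic output with a nearest-centroid assignment; all function names, dimensions, and data here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse_and_reduce(vision, proprio, audio, k=2):
    """Concatenate per-modality features and project onto the top-k
    principal directions via SVD (a simple stand-in for the deep
    network's unsupervised dimensionality reduction)."""
    X = np.concatenate([vision, proprio, audio], axis=1)
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def symbolize(Z, centroids):
    """Discrete label = nearest centroid (the 'symbol');
    continuous trait = offset from that centroid (the fine
    discrimination between two similar stimuli)."""
    d = np.linalg.norm(Z[:, None, :] - centroids[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    traits = Z - centroids[labels]
    return labels, traits

# Toy data: 6 samples, three modalities of different dimensionality.
vision = rng.normal(size=(6, 8))
proprio = rng.normal(size=(6, 4))
audio = rng.normal(size=(6, 3))

Z = fuse_and_reduce(vision, proprio, audio, k=2)
# Hypothetical "learned" symbols: two centroids from the toy samples.
centroids = np.array([Z[:3].mean(axis=0), Z[3:].mean(axis=0)])
labels, traits = symbolize(Z, centroids)
```

Note that `symbol + trait` is lossless within the reduced space (`centroids[labels] + traits` recovers `Z`), which mirrors the paper's point that the symbolic representation and the continuous traits are complementary rather than competing outputs.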

Related Topics
Physical Sciences and Engineering › Computer Science › Artificial Intelligence
Authors