Article code: 413356
Journal code: 680437
Publication year: 2015
English article: 16-page PDF, full text, free download
English title of the ISI article
Deep unsupervised network for multimodal perception, representation and classification
Keywords
Related subjects
Engineering and Basic Sciences · Computer Engineering · Artificial Intelligence
English abstract


• We propose a fully unsupervised algorithm for perception.
• The algorithm processes high-dimensional, multimodal input.
• The output is a symbolic representation along with continuous traits.
• We apply it to a robotic task involving vision, proprioception and speech.

In this paper, we tackle the problem of multimodal learning for autonomous robots. Autonomous robots interacting with humans in an evolving environment need the ability to acquire knowledge from their multiple perceptual channels in an unsupervised way. Most approaches in the literature rely on hand-engineered methods to process each perceptual modality. In contrast, robots should be able to acquire their own features from raw sensors, leveraging the information elicited by interaction with their environment: learning from sensorimotor experience is a more efficient strategy from a life-long learning perspective. To this end, we propose an architecture based on deep networks, which the humanoid robot iCub uses to learn a task from multiple perceptual modalities (proprioception, vision, audition). By structuring high-dimensional, multimodal information into a set of distinct sub-manifolds in a fully unsupervised way, it performs substantial dimensionality reduction while providing both a symbolic representation of the data and fine discrimination between similar stimuli. Moreover, the proposed network exploits multimodal correlations to improve the representation of each individual modality.
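The pipeline the abstract describes (per-modality feature extraction, fusion, a discrete symbolic representation plus continuous traits for fine discrimination) can be illustrated with a minimal sketch. This is not the paper's method: the authors use deep networks on the iCub, whereas this sketch substitutes PCA for the per-modality encoders and k-means for the symbolic layer; all data shapes, modality names, and sizes below are illustrative assumptions.

```python
# Hedged sketch: reduce each high-dimensional modality separately, fuse the
# codes, then derive a symbolic label (cluster id) plus continuous traits
# (offset within the cluster). PCA + k-means stand in for the deep network.
import numpy as np

rng = np.random.default_rng(0)

def pca_encode(x, k):
    """Project a (n, d) modality matrix onto its top-k principal components."""
    centered = x - x.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:k].T

def kmeans(codes, n_clusters, iters=50):
    """Tiny k-means; returns (labels, centroids)."""
    centroids = codes[rng.choice(len(codes), n_clusters, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(codes[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        for c in range(n_clusters):
            if np.any(labels == c):
                centroids[c] = codes[labels == c].mean(axis=0)
    return labels, centroids

# Three synthetic "modalities" (stand-ins for vision, proprioception,
# audition) observed for the same 200 events, each in its own space.
n = 200
vision = rng.normal(size=(n, 64))
proprio = rng.normal(size=(n, 32))
audio = rng.normal(size=(n, 16))

# Per-modality dimensionality reduction, then fusion by concatenation.
fused = np.hstack([pca_encode(m, 4) for m in (vision, proprio, audio)])

# Symbolic representation: one discrete cluster id per event.
labels, centroids = kmeans(fused, n_clusters=5)

# Continuous traits: where the event sits relative to its cluster centre,
# enabling fine discrimination between two stimuli with the same symbol.
traits = fused - centroids[labels]

print(fused.shape, labels.shape, traits.shape)  # (200, 12) (200,) (200, 12)
```

The design point the sketch mirrors is that the discrete label alone would lose information; keeping the within-cluster offset preserves the "fine discrimination between similar stimuli" the abstract claims alongside the symbolic output.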

Publisher
Database: Elsevier - ScienceDirect
Journal: Robotics and Autonomous Systems - Volume 71, September 2015, Pages 83–98
Authors