Article ID | Journal | Published Year | Pages | File Type
---|---|---|---|---
409334 | Neurocomputing | 2007 | 11 Pages | |
Abstract
Binding, that is, finding the correspondence between sensations in different modalities such as vision and touch, is one of the most fundamental cognitive functions. Learning a multimodal representation of the body is thought to be the first step toward binding, since the morphological constraints on sensations during self-body observation make the binding problem tractable. In this paper, we address the problem of learning to match the foci of attention in vision and touch through self-body observation. We propose a cross-anchoring Hebbian learning rule that uniquely associates double-touching with self-occlusion. Experiments in both computer simulation and with a real robot demonstrate the validity of the proposed method.
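The abstract does not spell out the cross-anchoring rule itself. As a rough, hypothetical illustration of the general idea, the sketch below uses a plain Hebbian co-occurrence update between two discrete attention spaces (touch and vision); the sizes, learning rate, and normalization scheme are all assumptions, not the paper's actual formulation.

```python
import numpy as np

# Hypothetical sketch, NOT the paper's exact cross-anchoring rule:
# two modalities (touch and vision) each report a discrete focus of
# attention, and co-occurring foci strengthen an association matrix W.

N_TOUCH, N_VISION = 5, 5   # assumed sizes of the two attention spaces
ETA = 0.1                  # assumed learning rate

def hebbian_update(W, touch_focus, vision_focus, eta=ETA):
    """Strengthen the link between the currently attended touch and
    vision units, then row-normalize so each row of W stays a valid
    distribution of association strengths."""
    W = W.copy()
    W[touch_focus, vision_focus] += eta
    W /= W.sum(axis=1, keepdims=True)
    return W

# Train on synthetic co-occurrences where touch unit i always pairs
# with vision unit i (a stand-in for double-touch/self-occlusion events)
rng = np.random.default_rng(0)
W = np.full((N_TOUCH, N_VISION), 1.0 / N_VISION)
for _ in range(200):
    i = rng.integers(N_TOUCH)
    W = hebbian_update(W, i, i)

# After learning, each touch focus most strongly predicts its paired
# vision focus
print(all(W[i].argmax() == i for i in range(N_TOUCH)))  # → True
```

Row normalization is one simple way to keep repeated co-occurrences from growing without bound; the paper's rule for disambiguating non-unique pairings is more involved than this sketch.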
Related Topics
Physical Sciences and Engineering
Computer Science
Artificial Intelligence
Authors
Yuichiro Yoshikawa, Koh Hosoda, Minoru Asada