Article ID | Journal | Published Year | Pages | File Type |
---|---|---|---|---|
526451 | Computer Vision and Image Understanding | 2007 | 19 | |
Abstract
In this paper, we review the major approaches to multimodal human–computer interaction, giving an overview of the field from a computer vision perspective. In particular, we focus on body, gesture, gaze, and affective interaction (facial expression recognition and emotion in audio). We discuss user and task modeling, and multimodal fusion, highlighting challenges, open issues, and emerging applications for multimodal human–computer interaction (MMHCI) research.
Related Topics
Physical Sciences and Engineering › Computer Science › Computer Vision and Pattern Recognition
Authors
Alejandro Jaimes, Nicu Sebe