Article ID Journal Published Year Pages File Type
870969 IRBM 2014 10 Pages PDF
Abstract

While several approaches have been proposed to recognize human emotions from facial expressions or physiological signals, relatively little work has been done on fusing these, and other, modalities to improve the accuracy and robustness of emotion recognition systems. In this paper, we propose two fusion methods for the facial and physiological modalities, one at the feature level and one at the decision level. For feature-level fusion, we tested a mutual information approach for selecting the most relevant features and principal component analysis for reducing their dimensionality. For decision-level fusion, we implemented two methods: the first is based on a voting process and the second on dynamic Bayesian networks. The system is validated on data obtained through an emotion elicitation experiment based on the International Affective Picture System. Results show that feature-level fusion outperforms decision-level fusion.
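The feature-level fusion pipeline the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, bin counts, and dimensions are assumptions, the mutual information estimate uses simple histogram binning, and PCA is computed via SVD.

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Histogram-based MI estimate between a continuous feature x
    and discrete class labels y (a common, simple approximation)."""
    edges = np.histogram_bin_edges(x, bins=bins)[1:-1]
    x_binned = np.digitize(x, edges)            # bin indices 0..bins-1
    classes = {c: i for i, c in enumerate(np.unique(y))}
    joint = np.zeros((bins, len(classes)))
    for xb, yc in zip(x_binned, y):
        joint[xb, classes[yc]] += 1
    joint /= joint.sum()                        # joint distribution p(x, y)
    px = joint.sum(axis=1, keepdims=True)       # marginal p(x)
    py = joint.sum(axis=0, keepdims=True)       # marginal p(y)
    nz = joint > 0
    return float((joint[nz] * np.log(joint[nz] / (px @ py)[nz])).sum())

def fuse_features(facial, physio, labels, k=10, n_components=5):
    """Concatenate both modalities (feature-level fusion), keep the k
    features with highest MI against the labels, then reduce with PCA."""
    fused = np.hstack([facial, physio])
    mi = np.array([mutual_information(fused[:, j], labels)
                   for j in range(fused.shape[1])])
    selected = fused[:, np.argsort(mi)[::-1][:k]]       # top-k by MI
    centered = selected - selected.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)  # PCA basis
    return centered @ vt[:n_components].T

# Synthetic example: 60 trials, 3 emotion classes, two feature matrices.
rng = np.random.default_rng(0)
labels = rng.integers(0, 3, size=60)
facial = rng.normal(size=(60, 20)) + labels[:, None] * 0.5
physio = rng.normal(size=(60, 8))
reduced = fuse_features(facial, physio, labels)
print(reduced.shape)  # (60, 5)
```

The reduced feature matrix would then be fed to any standard classifier; the paper's decision-level alternative instead trains per-modality classifiers and combines their outputs by voting or a dynamic Bayesian network.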

Related Topics
Physical Sciences and Engineering Engineering Biomedical Engineering
Authors