| Article ID | Journal | Published Year | Pages | File Type |
|---|---|---|---|---|
| 4946668 | Neural Networks | 2017 | 9 | |
Abstract
Speech Emotion Recognition (SER) can be regarded as a static or dynamic classification problem, which makes SER an excellent test bed for investigating and comparing various deep learning architectures. We describe a frame-based formulation to SER that relies on minimal speech processing and end-to-end deep learning to model intra-utterance dynamics. We use the proposed SER system to empirically explore feed-forward and recurrent neural network architectures and their variants. Experiments conducted illuminate the advantages and limitations of these architectures in paralinguistic speech recognition and emotion recognition in particular. As a result of our exploration, we report state-of-the-art results on the IEMOCAP database for speaker-independent SER and present quantitative and qualitative assessments of the models' performances.
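The frame-based formulation described in the abstract treats an utterance as a sequence of frame-level features and lets a deep network model intra-utterance dynamics end to end. The sketch below is a minimal, illustrative rendering of that idea using a recurrent classifier; the feature dimension, hidden size, and the four emotion classes are assumptions for illustration, not the authors' exact architecture or configuration.

```python
# Illustrative sketch of a frame-based SER classifier (not the paper's exact model):
# each utterance is a sequence of frame-level spectral features, and a recurrent
# network models intra-utterance dynamics before predicting an emotion class.
import torch
import torch.nn as nn

class FrameBasedSER(nn.Module):
    def __init__(self, n_features=40, hidden_size=128, n_emotions=4):
        super().__init__()
        # The LSTM consumes one feature vector per speech frame.
        self.rnn = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.classifier = nn.Linear(hidden_size, n_emotions)

    def forward(self, frames):
        # frames: (batch, n_frames, n_features), e.g. log-Mel filterbank energies
        outputs, _ = self.rnn(frames)
        # Use the final hidden state as an utterance-level summary for classification.
        return self.classifier(outputs[:, -1, :])

# Usage: a batch of 2 utterances, 300 frames each, 40 features per frame.
model = FrameBasedSER()
logits = model(torch.randn(2, 300, 40))  # -> (2, 4) emotion scores
```

A feed-forward variant of the same formulation would instead classify each frame independently and aggregate the frame-level predictions over the utterance, which is one of the architectural comparisons the paper explores.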
Related Topics
Physical Sciences and Engineering
Computer Science
Artificial Intelligence
Authors
Haytham M. Fayek, Margaret Lech, Lawrence Cavedon