Article ID | Journal | Published Year | Pages | File Type
---|---|---|---|---
6960665 | Speech Communication | 2018 | 38 |
Abstract
Finding an appropriate feature representation for audio data is central to speech emotion recognition. Most existing audio features rely on hand-crafted feature encoding techniques, such as the AVEC challenge feature set. An alternative approach is to learn features automatically. This has the advantage of generalizing well to new data, particularly if the features are learned in an unsupervised manner with fewer restrictions on the data itself. In this work, we adopt the sparse coding framework as a means of automatically learning feature representations from audio and propose a hierarchical sparse coding (HSC) scheme. Experimental results indicate that the features, obtained in an unsupervised fashion, capture useful properties of the speech that distinguish between emotions.
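The sparse coding idea in the abstract can be illustrated with a minimal single-layer sketch using scikit-learn's `DictionaryLearning`. This is not the authors' HSC implementation: the input data is random stand-in for audio frames, and all parameter values are illustrative assumptions; the paper stacks such layers hierarchically.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

# Hypothetical stand-in for short-time audio feature frames
# (e.g. spectrogram patches); the paper's actual data is not reproduced here.
rng = np.random.RandomState(0)
frames = rng.randn(200, 40)  # 200 frames, 40 dimensions each

# Learn an overcomplete dictionary and sparse activation codes.
# A hierarchical scheme would feed these codes into a further coding layer.
dl = DictionaryLearning(
    n_components=64,                 # overcomplete: 64 atoms for 40-dim input
    alpha=1.0,                       # sparsity penalty (illustrative value)
    max_iter=20,                     # kept small for a quick demo
    transform_algorithm="lasso_lars",
    random_state=0,
)
codes = dl.fit_transform(frames)     # sparse code per frame

print(codes.shape)                   # (200, 64)
print(float(np.mean(codes == 0)))    # fraction of zero activations
```

Each frame is represented by a sparse combination of learned dictionary atoms; the fraction of zero activations confirms that the codes, not the raw frames, carry the compact representation used as features.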
Related Topics
Physical Sciences and Engineering
Computer Science
Signal Processing
Authors
Diana Torres-Boza, Meshia Cédric Oveneke, Fengna Wang, Dongmei Jiang, Werner Verhelst, Hichem Sahli