Article code | Journal code | Publication year | English article | Full-text version |
---|---|---|---|---|
566074 | 875927 | 2011 | 14-page PDF | Free download |

Classification of the emotional content of short Finnish emotional [a:] vowel speech samples is performed using prosodic features derived from vocal source parameters and from traditional intonation contour parameters. A decision-level fusion classification architecture based on multiple kNN classifiers is proposed for fusing the speech prosody and vocal source experts. The sum fusion rule and the sequential forward floating search (SFFS) algorithm are used to produce leveraged expert classifiers. Automatic classification tests with five emotional classes demonstrate that emotional content classification performance significantly above chance level is achievable using both prosodic and vocal source features. The fusion classification approach is further shown to be capable of emotional content classification in the vowel domain that approaches the performance of the human reference.
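To make the fusion architecture concrete, the sketch below illustrates decision-level sum-rule fusion of two kNN "experts", one per feature modality (prosody and vocal source). It uses synthetic data; the feature dimensions, number of neighbours, and train/test split are illustrative assumptions and do not reproduce the paper's actual features, SFFS-selected subsets, or experimental setup.

```python
# Minimal sketch: decision-level (sum-rule) fusion of two kNN experts,
# one trained on prosodic features and one on vocal source features.
# All data below is synthetic; dimensions and k are assumptions for illustration.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_classes = 500, 5                   # five emotional classes, as in the paper
X_prosody = rng.normal(size=(n_samples, 12))    # hypothetical prosodic feature vectors
X_source = rng.normal(size=(n_samples, 8))      # hypothetical vocal source feature vectors
y = rng.integers(0, n_classes, size=n_samples)  # synthetic emotion labels

idx_train, idx_test = train_test_split(
    np.arange(n_samples), test_size=0.3, random_state=0
)

# Train one kNN expert per feature modality.
experts = []
for X in (X_prosody, X_source):
    clf = KNeighborsClassifier(n_neighbors=5)
    clf.fit(X[idx_train], y[idx_train])
    experts.append((clf, X))

# Sum fusion rule: add the experts' class-posterior estimates and take the argmax.
posterior_sum = np.zeros((len(idx_test), n_classes))
for clf, X in experts:
    posterior_sum += clf.predict_proba(X[idx_test])

classes = experts[0][0].classes_
fused_prediction = classes[posterior_sum.argmax(axis=1)]

accuracy = (fused_prediction == y[idx_test]).mean()
print(f"Fused accuracy on synthetic data: {accuracy:.3f}")
```

In the paper's full architecture, the SFFS algorithm is additionally used to select the feature subsets feeding each expert before fusion; that selection step is omitted here for brevity.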
Research highlights
► Multi-class emotion classification is possible using vowel-length samples.
► Voice quality features contain useful emotional information.
► Decision level fusion increases the robustness of emotion classification.
Journal: Speech Communication - Volume 53, Issue 3, March 2011, Pages 269–282