Article code: 565286
Journal code: 1452035
Publication year: 2014
English article: 14-page PDF
Full text: free download
English title of the ISI article
Emotion in the voice influences the way we scan emotional faces
Keywords
Speech; Prosody; Face; Eye-tracking; Emotion; Cross-modal
Related subjects
Engineering and Basic Sciences > Computer Engineering > Signal Processing
English abstract


• We investigated whether emotional speech prosody influences emotional face scanning.
• Results confirm effects of emotional prosody congruency on eye movements.
• Vocal emotion cues could guide how humans process facial expressions.

Previous eye-tracking studies have found that listening to emotionally-inflected utterances guides visual behavior towards an emotionally congruent face (e.g., Rigoulot and Pell, 2012). Here, we investigated in more detail whether emotional speech prosody influences how participants scan and fixate specific features of an emotional face that is congruent or incongruent with the prosody. Twenty-one participants viewed individual faces expressing fear, sadness, disgust, or happiness while listening to an emotionally-inflected pseudo-utterance spoken in a congruent or incongruent prosody. Participants judged whether the emotional meaning of the face and voice were the same or different (match/mismatch). Results confirm that there were significant effects of prosody congruency on eye movements when participants scanned a face, although these varied by emotion type; a matching prosody promoted more frequent looks to the upper part of fear and sad facial expressions, whereas visual attention to upper and lower regions of happy (and to some extent disgust) faces was more evenly distributed. These data suggest ways that vocal emotion cues guide how humans process facial expressions in a way that could facilitate recognition of salient visual cues, to arrive at a holistic impression of intended meanings during interpersonal events.
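The abstract describes comparing looks to upper versus lower face regions across match/mismatch trials. A minimal sketch of that kind of area-of-interest (AOI) analysis is shown below; all names, coordinates, and data are hypothetical illustrations, not the authors' actual pipeline.

```python
# Hypothetical AOI analysis: count fixations on the upper vs. lower half of a
# face image and compare prosody-face match vs. mismatch trials.
from collections import defaultdict

FACE_MIDLINE_Y = 300  # assumed pixel boundary between upper and lower face


def aoi(fixation_y):
    """Assign a fixation to the upper or lower face region by its y coordinate."""
    return "upper" if fixation_y < FACE_MIDLINE_Y else "lower"


def upper_look_proportion(trials):
    """Proportion of fixations landing on the upper face, per condition.

    `trials` is a list of dicts with keys:
      'condition' -- 'match' or 'mismatch' (prosody-face congruency)
      'fixations' -- list of (x, y) gaze coordinates
    """
    counts = defaultdict(lambda: {"upper": 0, "total": 0})
    for trial in trials:
        c = counts[trial["condition"]]
        for _, y in trial["fixations"]:
            c["total"] += 1
            if aoi(y) == "upper":
                c["upper"] += 1
    return {cond: c["upper"] / c["total"]
            for cond, c in counts.items() if c["total"]}


# Invented example data in which matching prosody draws more upper-face looks.
trials = [
    {"condition": "match",    "fixations": [(210, 120), (230, 140), (250, 420)]},
    {"condition": "mismatch", "fixations": [(220, 380), (240, 410), (215, 130)]},
]
proportions = upper_look_proportion(trials)
```

In a real analysis the AOI boundaries would be defined per face stimulus rather than by a single fixed midline, and proportions would be aggregated per participant before statistical testing.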

Publisher
Database: Elsevier - ScienceDirect
Journal: Speech Communication - Volume 65, November–December 2014, Pages 36–49
Authors