Article code: 6960866
Journal code: 1452005
Publication year: 2017
English article: 43 pages, PDF
Full-text version: Free download
English title of the ISI article
A perceptual study of how rapidly and accurately audiovisual cues to utterance-final boundaries can be interpreted in Chinese and English
Related subjects
Engineering and Basic Sciences; Computer Engineering; Signal Processing
English abstract
Speakers and their addressees make use of both auditory and visual features as cues to the end of a speaking turn. Prior work, mostly based on analyses of languages like Dutch and English, has shown that intonational markers such as melodic boundary tones as well as variation in eye gaze behaviour are often exploited to pre-signal the terminal edge of an utterance. However, we still lack knowledge on how such auditory and visual cues relate to each other, and whether the results for Dutch and English also generalize to other languages. This article compares possible audiovisual cues to prosodic boundaries in two typologically different languages, i.e., English and Chinese. A specific paradigm was used to elicit natural stimuli from 16 speakers, evenly distributed over both languages, which were then presented to L1 and L2 observers. They were asked to judge whether a spoken fragment had occurred in utterance-final position or not, measuring both the participants' reaction time and accuracy. Participants were exposed to stimuli in three different formats: audio-only, vision-only or audiovisual. Our most important results are that (1) visual cues were important for boundary perception in both languages; (2) judges from either language group identified boundaries faster and more accurately in English than in Chinese; (3) there is no in-group advantage as observers were equally good in judging finality in their L1 and L2; (4) there are consistent correlations between the measures of reaction time and accuracy (shorter responses correlate with higher accuracy).
Publisher
Database: Elsevier - ScienceDirect
Journal: Speech Communication - Volume 95, December 2017, Pages 68-77
Authors