• We analyze and leverage commonalities between speech and music in terms of rhythm.
• We develop automatic musical rhythm generation techniques for L2 prosodic training.
• The automatic procedure can be applied to arbitrary English sentences.
• Users may practice by speaking in synchrony with the generated musical rhythm.
• After practice, users’ speech better approximates stress-timed rhythm.
Language transfer creates a challenge for Chinese (L1) speakers in acquiring English (L2) rhythm. This is a widely encountered difficulty among foreign learners of English and a major obstacle to acquiring near-native oral proficiency. This paper presents a system named MusicSpeak, which capitalizes on musical rhythm for prosodic training in second language acquisition. It is one of the first efforts to develop an automatic procedure, applicable to arbitrary English sentences, that casts rhythmic patterns in speech into rhythmic patterns in music. Learners practice by speaking in synchrony with the generated musical rhythm. Evaluation results suggest that after practice, the learners’ speech generally achieves higher durational variability and better approximates stress-timed rhythm.
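The abstract does not name the metric behind "durational variability," but in rhythm research such variability is commonly quantified with the normalized Pairwise Variability Index (nPVI) over successive vowel or syllable durations; stress-timed languages like English tend to score higher than syllable-timed ones. A minimal sketch of that standard metric (not necessarily the paper's exact evaluation procedure):

```python
def npvi(durations):
    """Normalized Pairwise Variability Index (Grabe & Low).

    Takes a sequence of successive interval durations (e.g. vowel
    durations in seconds) and returns a score in [0, 200].
    Higher scores indicate greater durational variability, which is
    characteristic of stress-timed rhythm.
    """
    if len(durations) < 2:
        raise ValueError("need at least two durations")
    # Mean-normalized absolute difference of each adjacent pair
    diffs = [
        abs(a - b) / ((a + b) / 2)
        for a, b in zip(durations, durations[1:])
    ]
    return 100 * sum(diffs) / len(diffs)

# Roughly equal durations (syllable-timed tendency) -> low nPVI
even = npvi([0.10, 0.11, 0.10, 0.12])
# Alternating long/short durations (stress-timed tendency) -> high nPVI
alternating = npvi([0.20, 0.08, 0.22, 0.07])
print(even, alternating)
```

Under this metric, the paper's finding corresponds to learners' post-practice speech producing higher scores, i.e. duration contrasts between stressed and unstressed units closer to native English rhythm.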
Journal: Computer Speech & Language - Volume 37, May 2016, Pages 67–81