Article ID | Journal | Published Year | Pages
---|---|---|---
4973694 | Computer Speech & Language | 2017 | 16
Abstract
This paper investigates the use of multi-distribution deep neural networks (MD-DNNs) for automatic intonation classification in second-language (L2) English speech. If the classified intonation differs from the target one, we consider a mispronunciation to be detected, and appropriate diagnostic feedback can then be provided. To transcribe speech data for intonation classification, we propose RULF labels, which transcribe an intonation as rising, upper, lower, or falling. These four labels can be further merged into two groups: rising and falling. Based on the annotated data from 100 Mandarin and 100 Cantonese learners, we develop an intonation classifier that considers only the 8 frames (i.e., 80 ms) of pitch values immediately prior to the end of the pitch contour over an intonational phrase (IP). This classifier determines the intonation of L2 English speech as either rising or falling with an accuracy of 93.0%.
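To make the classifier's input and output concrete, here is a minimal sketch of a binary rising/falling classifier over the last 8 pitch frames of an IP. It is not the authors' MD-DNN: the plain feed-forward layers, the 10 ms frame-shift assumption, the padding strategy, and all layer sizes and names (RisingFallingClassifier, tail_pitch_features) are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

FRAME_SHIFT_MS = 10   # assumption: 10 ms frames, so 8 frames ~= 80 ms
NUM_FRAMES = 8        # pitch frames taken before the end of the IP contour


class RisingFallingClassifier(nn.Module):
    """Toy feed-forward stand-in for the paper's MD-DNN classifier.

    Input: the last 8 pitch values (e.g. F0) of an intonational phrase.
    Output: logits for the two merged RULF classes, rising vs. falling.
    Layer sizes are illustrative, not taken from the paper.
    """

    def __init__(self, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NUM_FRAMES, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 2),   # 0 = falling, 1 = rising
        )

    def forward(self, pitch_tail: torch.Tensor) -> torch.Tensor:
        return self.net(pitch_tail)


def tail_pitch_features(f0_contour: list[float]) -> torch.Tensor:
    """Keep the last 8 voiced (non-zero) pitch values of an IP contour,
    left-padding by repeating the earliest kept value if fewer exist."""
    voiced = [f for f in f0_contour if f > 0]
    if not voiced:
        raise ValueError("no voiced frames in contour")
    tail = voiced[-NUM_FRAMES:]
    if len(tail) < NUM_FRAMES:
        tail = [tail[0]] * (NUM_FRAMES - len(tail)) + tail
    return torch.tensor(tail, dtype=torch.float32)


if __name__ == "__main__":
    model = RisingFallingClassifier()
    # Hypothetical F0 contour (Hz) for one intonational phrase.
    contour = [0.0, 110.0, 112.0, 115.0, 118.0, 121.0, 125.0, 130.0, 136.0, 142.0]
    logits = model(tail_pitch_features(contour).unsqueeze(0))
    label = "rising" if logits.argmax(dim=-1).item() == 1 else "falling"
    print(label)
```

In the paper the four RULF labels are first merged into the two groups shown here; an untrained sketch like this would of course need to be trained on annotated L2 speech before its rising/falling decisions mean anything.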
Related Topics: Physical Sciences and Engineering > Computer Science > Signal Processing
Authors
Kun Li, Xixin Wu, Helen Meng