Article ID: 569002
Journal: Speech Communication
Published Year: 2006
Pages: 18
File Type: PDF
Abstract

In this paper, we present a state-space formulation of a neural-network-based hidden dynamic model of speech whose parameters are trained using an approximate EM algorithm. This efficient and effective training makes use of the output of an off-the-shelf formant tracker (for the vowel segments of the speech signal), in addition to the Mel-cepstral observations, to simplify the complex sufficient statistics that would be required by the exact EM algorithm. The trained model, consisting of the state equation for the target-directed vocal tract resonance (VTR) dynamics over all classes of speech sounds (including consonant closure and constriction) and the observation equation mapping the VTR to the Mel-cepstral acoustic measurements, is then used to recover the unobserved VTR with the extended Kalman filter. The results demonstrate accurate estimation of the VTR, especially during rapid consonant–vowel or vowel–consonant transitions and during consonant closure, when the acoustic measurement alone provides weak or no information for inferring the VTR values. The practical significance of correctly identifying the VTRs during consonantal closure or constriction is that they provide target frequency values for the VTR or formant transitions from adjacent sounds. Without such target values, the VTR transitions from vowel to consonant or from consonant to vowel are often very difficult to extract accurately with previous formant-tracking techniques. With the new technique reported in this paper, the consonantal VTRs and the related transitions can be identified more reliably from the speech signal.
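To make the inference step concrete, the sketch below implements a single extended Kalman filter predict/update for a target-directed state equation with a generic nonlinear observation mapping. It is a minimal illustration only: the first-order target-directed form, the parameters (Phi, target, Q, R), and the toy linear cepstral mapping h used in the example are assumptions for the sketch, not the paper's exact model or training procedure.

```python
import numpy as np

def ekf_step(x_prev, P_prev, y, Phi, target, Q, R, h, H_jacobian):
    """One EKF predict/update step for an assumed target-directed model.

    State equation (assumed first-order, target-directed form):
        x_t = Phi x_{t-1} + (I - Phi) target + w_t,  w_t ~ N(0, Q)
    Observation equation (nonlinear VTR-to-cepstrum mapping):
        y_t = h(x_t) + v_t,                          v_t ~ N(0, R)
    """
    I = np.eye(len(x_prev))

    # Predict: the state is pulled toward the phone-dependent target.
    x_pred = Phi @ x_prev + (I - Phi) @ target
    P_pred = Phi @ P_prev @ Phi.T + Q

    # Update: linearize the observation mapping around the prediction.
    H = H_jacobian(x_pred)               # Jacobian of h at x_pred
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ (y - h(x_pred))
    P_new = (I - K @ H) @ P_pred
    return x_new, P_new

# Toy usage with a hypothetical linear observation mapping (illustration only).
dim_x, dim_y = 4, 12                      # e.g., 4 VTRs, 12 cepstral coefficients
C = np.random.randn(dim_y, dim_x) * 0.01
h = lambda x: C @ x
H_jac = lambda x: C
target = np.array([500.0, 1500.0, 2500.0, 3500.0])
y = h(target)                             # simulated observation
x, P = np.full(dim_x, 400.0), np.eye(dim_x) * 1e4
x, P = ekf_step(x, P, y, Phi=np.eye(dim_x) * 0.9, target=target,
                Q=np.eye(dim_x) * 10.0, R=np.eye(dim_y) * 0.1,
                h=h, H_jacobian=H_jac)
```

In the paper's setting, h would be the trained neural-network mapping from VTR to Mel-cepstra and its Jacobian would be obtained from that network; here a fixed linear map stands in so the sketch runs on its own.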

Related Topics
Physical Sciences and Engineering > Computer Science > Signal Processing
Authors