Article ID: 558211
Journal: Computer Speech & Language
Published Year: 2016
Pages: 20
File Type: PDF
Abstract

• Conversion of silent articulation captured by ultrasound and video to modal speech.
• Comparison of GMM and full-covariance phonetic HMM without vocabulary limitation.
• The HMM-based approach allows the use of linguistic information for regularization.
• Objective evaluation showed a lower but more fluctuating spectral distortion for HMM.
• Perceptual evaluation showed better intelligibility for HMM on consonants.

This article investigates statistical mapping techniques for converting articulatory movements into audible speech, with no restriction on the vocabulary, in the context of a silent speech interface driven by ultrasound and video imaging. As a baseline, we first evaluated the GMM-based mapping with dynamic features proposed by Toda et al. (2007) for voice conversion. We then proposed a 'phonetically informed' version of this technique, based on full-covariance HMMs. This approach aims (1) to model explicitly the articulatory timing of each phonetic class, and (2) to exploit linguistic knowledge to regularize the problem of silent speech conversion. Both techniques were compared on continuous speech for two French speakers (one male, one female). For modal speech, the HMM-based technique showed lower spectral distortion in an objective evaluation. However, perceptual tests (transcription and XAB discrimination tests) showed better intelligibility for the GMM-based technique, probably because its output quality fluctuates less. For silent speech, a perceptual identification test revealed better segmental intelligibility for the HMM-based technique on consonants.
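The GMM-based baseline rests on regression over a joint Gaussian mixture: a GMM is trained on stacked articulatory/spectral feature vectors, and at conversion time the spectral features are predicted as a responsibility-weighted sum of per-component conditional means. The sketch below illustrates only this core mapping step, assuming a joint GMM with full covariances; the dynamic (delta) features and MLPG trajectory smoothing of Toda et al. (2007), as well as all parameter values, are omitted or illustrative, not taken from the paper.

```python
import numpy as np

def gmm_map(x, weights, means, covs, dx):
    """Predict acoustic features y from articulatory features x under a
    joint GMM over z = [x; y] (simplified, no dynamic features).

    weights : (K,)      component priors
    means   : (K, D)    joint means, D = dx + dy
    covs    : (K, D, D) joint full covariances
    dx      : dimensionality of x within the joint vector
    """
    K, D = means.shape
    log_resp = np.empty(K)
    cond_means = np.empty((K, D - dx))
    for k in range(K):
        mu_x, mu_y = means[k, :dx], means[k, dx:]
        S_xx = covs[k, :dx, :dx]          # marginal covariance of x
        S_yx = covs[k, dx:, :dx]          # cross-covariance of y and x
        diff = x - mu_x
        sol = np.linalg.solve(S_xx, diff)
        # log p(x | k) up to constants shared by all components
        _, logdet = np.linalg.slogdet(S_xx)
        log_resp[k] = np.log(weights[k]) - 0.5 * (diff @ sol + logdet)
        # E[y | x, k] = mu_y + S_yx S_xx^{-1} (x - mu_x)
        cond_means[k] = mu_y + S_yx @ sol
    resp = np.exp(log_resp - log_resp.max())
    resp /= resp.sum()                    # posterior component weights
    return resp @ cond_means              # minimum-mean-square estimate
```

In the full method, this frame-by-frame conditional expectation is replaced by a maximum-likelihood trajectory estimate over static and delta features, which is what smooths the converted speech over time.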

Related Topics: Physical Sciences and Engineering › Computer Science › Signal Processing