| Article ID | Journal | Published Year | Pages |
|---|---|---|---|
| 10370126 | Speech Communication | 2005 | 18 |
Abstract
Pronunciation modeling in automatic speech recognition systems has had mixed results in the past; one likely reason for poor performance is the increased confusability in the lexicon from adding new pronunciation variants. In this work, we propose a new framework for determining lexically confusable words based on inverted finite state transducers (FSTs); we also present experiments designed to test some of the implementation details of this framework. The method is evaluated by examining how well the algorithm predicts the errors in an ASR system. The model is able to generalize confusions learned from a training set to predict errors made by the speech recognizer on an unseen test set.
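The core idea of detecting lexical confusability can be illustrated with a toy sketch: composing a lexicon with its inverse (the L∘L⁻¹ construction) maps each word through its pronunciation back to every word sharing that pronunciation. This is not the authors' implementation — their framework uses weighted FSTs with learned phone-level confusions — and the mini-lexicon below is hypothetical; exact-match dictionaries stand in for the transducers.

```python
# Toy illustration of the L o L^-1 idea: map a word to its phones,
# then map the phones back through the inverted lexicon to recover
# all words with that pronunciation. Real systems use weighted FSTs
# that also allow inexact (confusable) phone matches.

# Hypothetical mini-lexicon: word -> pronunciation (tuple of phones)
lexicon = {
    "read": ("r", "iy", "d"),
    "reed": ("r", "iy", "d"),
    "red":  ("r", "eh", "d"),
    "to":   ("t", "uw"),
    "two":  ("t", "uw"),
    "too":  ("t", "uw"),
}

def invert(lex):
    """Invert the lexicon: pronunciation -> list of words."""
    inv = {}
    for word, phones in lex.items():
        inv.setdefault(phones, []).append(word)
    return inv

def confusable(word, lex):
    """Words reached by word -> phones -> words, excluding the word itself."""
    inv = invert(lex)
    return sorted(w for w in inv.get(lex[word], ()) if w != word)

if __name__ == "__main__":
    print(confusable("two", lexicon))   # -> ['to', 'too']
    print(confusable("read", lexicon))  # -> ['reed']
```

In the full framework the inverse lexicon is a weighted transducer, so near-matches (e.g. "red" vs. "read") also surface, scored by how likely the recognizer is to confuse the underlying phones.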
Related Topics
Physical Sciences and Engineering
Computer Science
Signal Processing
Authors
Eric Fosler-Lussier, Ingunn Amdal, Hong-Kwang Jeff Kuo
