Article ID: 526708
Journal: Image and Vision Computing
Published Year: 2016
Pages: 12 Pages
File Type: PDF
Abstract

• A novel technique for automatic lip-reading is proposed.
• A weighted finite-state transducer cascade incorporating a confusion model is used.
• Performance was slightly better than a standard HMM system.
• The issue of suitable units for automatic lip-reading was also studied.
• It was found that visemes are sub-optimal because of reduced contextual modelling.

Automatic lip-reading (ALR) is a challenging task because the visual speech signal is known to be missing some important information, such as voicing. We propose an approach to ALR that acknowledges that this information is missing but assumes that it is substituted or deleted in a systematic way that can be modelled. We describe a system that learns such a model and then incorporates it into decoding, which is realised as a cascade of weighted finite-state transducers. Our results show a small but statistically significant improvement in recognition accuracy. We also investigate the issue of suitable visual units for ALR, and show that visemes are sub-optimal, not because they introduce lexical ambiguity, but because the reduction in modelling units entailed by their use reduces accuracy.
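The paper does not give implementation details here, but the decoding idea can be illustrated with a minimal, purely hypothetical sketch: a learned viseme-to-phoneme confusion model is composed with a pronunciation lexicon and language-model costs, so that information lost in the visual channel (such as voicing) is recovered probabilistically by a shortest-path search. All symbols, costs, and words below are illustrative assumptions, not data or code from the paper.

```python
# Toy stand-in for a WFST cascade C . L . G (confusion model, lexicon,
# language model) over the tropical semiring: costs add along a path and
# the lowest-cost path wins. A real system would use an FST toolkit; here
# the composition is simulated by brute-force enumeration.
from itertools import product

# Confusion model C: each observed viseme maps to candidate phonemes with a
# cost (negative log probability). Visemes collapse voicing, so /p/, /b/
# and /m/ all share the bilabial viseme "B".
confusion = {
    "B":  {"p": 1.2, "b": 1.0, "m": 1.5},
    "AH": {"ae": 0.7, "ah": 0.9},
    "T":  {"t": 0.8, "d": 1.1},
}

# Lexicon L with a unigram language-model cost G folded in per word.
lexicon = {
    ("b", "ae", "t"): ("bat", 2.0),
    ("p", "ae", "t"): ("pat", 2.3),
    ("m", "ae", "d"): ("mad", 2.5),
}

def decode(visemes):
    """Brute-force equivalent of a shortest-path search through the cascade:
    enumerate phoneme hypotheses licensed by the confusion model, keep those
    present in the lexicon, and return the lowest total-cost word."""
    best = None
    candidates = [confusion[v].items() for v in visemes]
    for combo in product(*candidates):
        phones = tuple(p for p, _ in combo)
        cost = sum(c for _, c in combo)
        if phones in lexicon:
            word, lm_cost = lexicon[phones]
            total = cost + lm_cost
            if best is None or total < best[1]:
                best = (word, total)
    return best

# The viseme string "B AH T" is ambiguous between "bat" and "pat";
# the confusion and language-model costs break the tie.
print(decode(["B", "AH", "T"]))   # -> ('bat', 4.5)
```

In an actual WFST decoder the same effect is obtained by composing the transducers and running a shortest-path algorithm, which avoids enumerating hypotheses explicitly.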

Related Topics
Physical Sciences and Engineering > Computer Science > Computer Vision and Pattern Recognition