| Article ID | Journal | Published Year | Pages | File Type |
|---|---|---|---|---|
| 6960891 | Speech Communication | 2017 | 10 Pages | |
Abstract
The results suggest that, after exposure to visually exaggerated speech, listeners were able to adapt to the conflicting audiovisual signals. In addition, subjects trained with enhanced visual cues (regimes 3 and 4) achieved better audiovisual recognition for a number of phoneme classes than those trained with unmodified visual speech (regime 2). However, there was no evidence of improvement in subsequent audio-only listening skills. The subjects' adaptation to the conflicting audiovisual signals may have slowed auditory perceptual learning and limited the extent to which the visual speech improved training gains.
Keywords
Related Topics
Physical Sciences and Engineering
Computer Science
Signal Processing
Authors
Najwa Alghamdi, Steve Maddock, Jon Barker, Guy J. Brown