| Article ID | Journal | Published Year | Pages | File Type |
|---|---|---|---|---|
| 9673479 | Speech Communication | 2005 | 12 Pages | |
Abstract
This paper reports on current efforts at the Department of Speech, Music and Hearing, KTH, on data-driven multimodal synthesis, including both visual speech synthesis and acoustic modeling. In this research we try to combine corpus-based methods with knowledge-based models and to exploit the best of the two approaches. The paper presents an attempt to build formant-synthesis systems based on both rule-generated and database-driven methods. A pilot experiment is also reported, showing that this approach can be a very interesting path to explore further. Two studies on visual speech synthesis are reported: one on data acquisition using a combination of motion capture techniques, and one concerned with coarticulation, comparing different models.
Related Topics
Physical Sciences and Engineering › Computer Science › Signal Processing
Authors
Rolf Carlson, Björn Granström