Visual Speech Synthesis Based on Learning Model

2009 
To generate more realistic mouth animation in visual speech synthesis, this paper proposes a method based on a two-level learning model. The authors learn the latent mapping between acoustic features and visual features through a combination of HMMs (Hidden Markov Models) and GAs (Genetic Algorithms). This model reduces redundant information when extracting acoustic features from a large acoustic sample space and predicts more realistic mouth animation. In addition, the paper proposes a new mouth-feature representation based on FAP (Facial Animation Parameter) points. This representation eliminates the effect of illumination and reduces the dimensionality of the mouth feature vector, improving the speed of both training and synthesis.
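The abstract does not detail how the GA interacts with the HMM stage, but one common role for a GA in this setting is pruning redundant acoustic feature dimensions before the statistical model is trained. The following is a minimal, self-contained sketch of that idea (not the authors' actual algorithm): a GA over binary feature masks, with fitness measured by how well the selected acoustic dimensions predict a toy visual target via least squares. The data, fitness function, and GA parameters are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (assumption, not from the paper): 200 frames of 12-dim
# "acoustic" features; the "visual" target (e.g. a mouth-opening value)
# depends only on the first 4 dims, so the remaining 8 are redundant.
n_frames, n_dims, n_informative = 200, 12, 4
X = rng.normal(size=(n_frames, n_dims))
w_true = np.zeros(n_dims)
w_true[:n_informative] = rng.normal(size=n_informative)
y = X @ w_true + 0.1 * rng.normal(size=n_frames)

def fitness(mask):
    """Score a binary feature mask by how well a least-squares fit on
    the selected acoustic dims predicts the visual target (R^2-like),
    with a small penalty per kept dimension to discourage redundancy."""
    if mask.sum() == 0:
        return -np.inf
    Xs = X[:, mask.astype(bool)]
    coef, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    resid = y - Xs @ coef
    return 1.0 - resid.var() / y.var() - 0.01 * mask.sum()

def evolve(pop_size=30, n_gen=40, p_mut=0.05):
    pop = rng.integers(0, 2, size=(pop_size, n_dims))
    for _ in range(n_gen):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]  # truncation selection
        children = []
        for cut in rng.integers(1, n_dims, size=pop_size):        # one-point crossover
            a, b = parents[rng.integers(len(parents), size=2)]
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(n_dims) < p_mut                     # bit-flip mutation
            children.append(np.where(flip, 1 - child, child))
        pop = np.array(children)
    scores = np.array([fitness(ind) for ind in pop])
    return pop[scores.argmax()], scores.max()

best_mask, best_score = evolve()
print(best_mask, round(best_score, 3))
```

In the paper's pipeline, a mask like `best_mask` would determine which acoustic dimensions feed the HMM, shrinking the sample space the HMM must model.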
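To illustrate why a FAP-point representation is insensitive to illumination, here is a small sketch (an assumption for illustration, not the paper's exact feature set): mouth landmarks are reduced to a couple of geometric distances normalized by a face-size reference. Because only landmark coordinates enter the computation, pixel intensities (and thus lighting) never affect the feature vector, and its dimensionality is far below that of a raw mouth image. The landmark layout and `face_scale` parameter are hypothetical.

```python
import numpy as np

def mouth_fap_vector(landmarks, face_scale):
    """Compress 2-D mouth landmarks into a short geometric feature
    vector: mouth opening and mouth width, normalized by a face-size
    reference so absolute position and scale drop out.

    landmarks: [mid-upper lip, mid-lower lip, left corner, right corner]
    face_scale: reference length in pixels (e.g. inter-ocular distance).
    """
    lm = np.asarray(landmarks, dtype=float)
    upper_mid, lower_mid = lm[0], lm[1]
    left_corner, right_corner = lm[2], lm[3]
    opening = np.linalg.norm(upper_mid - lower_mid)
    width = np.linalg.norm(left_corner - right_corner)
    return np.array([opening, width]) / face_scale

# Example: a closed vs. a slightly open mouth, reference length 100 px.
closed = mouth_fap_vector([[50, 40], [50, 44], [30, 42], [70, 42]], 100.0)
opened = mouth_fap_vector([[50, 35], [50, 55], [32, 45], [68, 45]], 100.0)
print(closed, opened)
```

A 2-dimensional vector like this (or the handful of distances a real FAP set defines) is what would be predicted per frame from the acoustic features, rather than raw pixels.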