An Adaptive Methodology for Facial Expression Transfer

2015 
This work presents a methodology that aims to improve and automate the generation of facial animation for interactive applications. We propose an adaptive, semi-automatic methodology that transfers facial expressions from one face mesh to another. The model has three main stages: rigging, expression transfer, and animation, where the output meshes can be used as key poses for blendshape-based animation. The input to the model is a face mesh in a neutral pose and a set of face data that can be provided from different sources, such as artist-crafted meshes and motion capture data. The model generates a set of blendshapes corresponding to the input set, with minimal user intervention. We opted for a simple rig structure in order to provide a straightforward correspondence with either sparse facial-feature-point-based systems or dense geometric data supplied by RGB-D-based systems. The rig structure can be refined on the fly to deal with different input geometric data as needed. The main contribution of this work is an adaptive methodology that creates facial animations with little user intervention and is capable of transferring expression details according to the need and/or the amount of input data.
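For context on the blendshape-based animation the abstract refers to: a blendshape rig conventionally evaluates an animated mesh as the neutral pose plus a weighted sum of per-expression vertex offsets. The sketch below illustrates that standard formulation only; it is not the paper's implementation, and the mesh data and weights are hypothetical.

```python
import numpy as np

def blend(neutral: np.ndarray, key_poses: list[np.ndarray],
          weights: list[float]) -> np.ndarray:
    """Standard blendshape evaluation: neutral pose plus a weighted
    sum of per-key-pose vertex deltas (key_pose - neutral)."""
    result = neutral.copy()
    for pose, w in zip(key_poses, weights):
        result += w * (pose - neutral)  # add weighted offset
    return result

# Hypothetical example: a tiny mesh of 3 vertices (N x 3 positions).
neutral = np.zeros((3, 3))
smile = np.array([[0.0, 0.1, 0.0],
                  [0.0, 0.0, 0.0],
                  [0.0, 0.1, 0.0]])

# Evaluate a frame at 50% of the "smile" key pose.
frame = blend(neutral, [smile], [0.5])
```

In this setting, each expression transferred by the methodology would serve as one key pose, and an animation is produced by varying the weights over time.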