Separating Pose and Expression in Face Images: A Manifold Learning Approach

2007 
Digital images of a person's face display a wide range of variation arising from differing pose, expression, and illumination conditions. Such variation can be modeled empirically by an appearance manifold in the image space. In this paper, we tackle the problem of learning the appearance manifold of faces in an unsupervised way. In particular, we aim to extract the substructure of facial expression and the substructure of pose change separately. Two distances, the orbit-distance and the group-distance, are defined to measure the differences between images due to expression and pose, respectively. To reconstruct the complete structure of the manifold, we collect the local distances and compute a factorized isometric embedding of the original data. We test the proposed method on a known face database and demonstrate its capability to separate expression and pose via unsupervised learning.
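As a rough illustration of the embedding step described above, the sketch below computes an Isomap-style isometric embedding separately from two local distance matrices, one intended to stand in for the orbit-distance (expression) and one for the group-distance (pose). The paper's actual distance definitions and its factorized embedding are not reproduced here; `D_expr`, `D_pose`, and the toy data are hypothetical placeholders.

```python
# Hedged sketch, not the authors' implementation: classical MDS on graph
# geodesics (Isomap-style), applied independently to two precomputed local
# distance matrices so that each substructure gets its own embedding.
import numpy as np
from scipy.sparse.csgraph import shortest_path


def isometric_embedding(local_dist, n_components=2):
    """Isomap-style embedding: geodesic completion followed by classical MDS."""
    # Complete the local distances to geodesics via shortest paths.
    geo = shortest_path(local_dist, method="D", directed=False)
    n = geo.shape[0]
    # Double-center the squared geodesic distances.
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (geo ** 2) @ J
    # Top eigenvectors of B give the low-dimensional coordinates.
    w, v = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:n_components]
    return v[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))


# Toy usage with random symmetric "local" distances (illustration only).
rng = np.random.default_rng(0)
n = 20
D_expr = rng.random((n, n)); D_expr = (D_expr + D_expr.T) / 2
D_pose = rng.random((n, n)); D_pose = (D_pose + D_pose.T) / 2
np.fill_diagonal(D_expr, 0.0)
np.fill_diagonal(D_pose, 0.0)

Y_expr = isometric_embedding(D_expr)  # expression coordinates
Y_pose = isometric_embedding(D_pose)  # pose coordinates
```

In practice the two distance matrices would come from the orbit- and group-distance computations on face images, with non-neighboring pairs left unconnected so that shortest paths recover the manifold's geodesic structure.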