Discriminative Feature Adaptation for Cross-Domain Facial Expression Recognition

2016 
Facial expression recognition is an important problem in many face-related tasks, such as face recognition, face animation, affective computing, and human-computer interaction. Existing methods mostly assume that training and testing face images are captured under the same conditions and drawn from the same population. This assumption, however, does not hold in real-world applications, where face images may come from different domains owing to varying cameras, illumination conditions, or populations. Motivated by recent progress in domain adaptation, this paper proposes an unsupervised domain adaptation method, called discriminative feature adaptation (DFA), which requires for training a set of labelled face images in the source domain and some additional unlabelled face images in the target domain. It seeks a feature space for representing face images from different domains such that two objectives are fulfilled: (i) the mismatch between the feature distributions of these face images is minimized, and (ii) the features are discriminative with respect to facial expressions. Compared with existing methods, the proposed method more effectively adapts discriminative features for recognizing facial expressions across domains. Evaluation experiments have been conducted on four public facial expression databases: CK+, JAFFE, PICS, and FEED. The results demonstrate the superior performance of the proposed method over competing methods.
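The abstract does not specify how the distribution mismatch in objective (i) is measured. A common choice in unsupervised domain adaptation is the maximum mean discrepancy (MMD) between source and target features; the sketch below illustrates that quantity only, and the `rbf_kernel`, `mmd2` names, the RBF bandwidth, and the synthetic Gaussian "domains" are all illustrative assumptions, not part of the paper's method.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise RBF kernel matrix: k(x, y) = exp(-gamma * ||x - y||^2).
    sq_dists = (np.sum(X**2, axis=1)[:, None]
                + np.sum(Y**2, axis=1)[None, :]
                - 2.0 * X @ Y.T)
    return np.exp(-gamma * sq_dists)

def mmd2(Xs, Xt, gamma=1.0):
    # Squared maximum mean discrepancy between source features Xs
    # and target features Xt (biased estimator).
    Kss = rbf_kernel(Xs, Xs, gamma)
    Ktt = rbf_kernel(Xt, Xt, gamma)
    Kst = rbf_kernel(Xs, Xt, gamma)
    return Kss.mean() + Ktt.mean() - 2.0 * Kst.mean()

# Toy illustration: a mean-shifted "target domain" yields a larger
# MMD than a fresh sample from the same distribution as the source.
rng = np.random.default_rng(0)
Xs = rng.normal(0.0, 1.0, size=(100, 8))   # source-domain features
Xt = rng.normal(1.0, 1.0, size=(100, 8))   # shifted target-domain features
Xs2 = rng.normal(0.0, 1.0, size=(100, 8))  # same distribution as source
print(mmd2(Xs, Xt) > mmd2(Xs, Xs2))
```

A feature mapping that minimizes such a term (jointly with a discriminative loss on the labelled source images, as in objective (ii)) aligns the two domains in the learned feature space.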