Deep Facial Action Unit Recognition and Intensity Estimation from Partially Labelled Data

2019 
Research on facial action unit (AU) analysis typically requires facial images labelled with AU occurrence or intensity. While unlabelled facial images abound, annotating them with AU labels or intensities is costly and time-consuming. Our approach enables AU analysis when only a subset of the images is labelled. We first exploit a large set of facial images to learn a deep framework that captures facial representations. A restricted Boltzmann machine then learns the distribution of AU labels or intensities from the available AU annotations. Finally, we train a support vector machine for AU recognition and a support vector regression for AU intensity estimation by maximizing the log-likelihood of the AU mapping functions under the learned joint AU distribution over all training data, while minimizing the errors between predicted and ground-truth AU occurrences or intensities on the labelled data. Experiments on two databases demonstrate the benefit of deep facial-feature learning, as well as of the AU label and intensity constraints, for AU occurrence recognition and intensity estimation in both fully and semi-supervised scenarios.
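The abstract describes a semi-supervised objective that couples a supervised loss on labelled samples with a label-distribution constraint applied to all samples. The sketch below is only an illustration of that general shape, not the authors' implementation: the function and variable names (semi_supervised_objective, toy_log_prior, features, labels) are hypothetical, the deep facial representations are replaced by random vectors, the RBM-learned AU distribution is replaced by a toy Bernoulli surrogate, and a simple linear hinge loss stands in for the SVM mapping function.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples of 16-dim "deep" features; only the first 50 carry AU labels.
features = rng.normal(size=(200, 16))
labels = np.full(200, np.nan)                      # NaN marks unlabelled samples
labels[:50] = np.sign(features[:50, 0])            # hypothetical AU occurrence labels in {-1, +1}


def toy_log_prior(au_pred):
    """Surrogate for a learned AU label distribution (the paper uses an RBM).

    Scores predicted AU occurrences (in {-1, +1}) under an assumed base rate.
    """
    occurrence = (au_pred + 1) / 2                 # map {-1, +1} -> {0, 1}
    p = 0.3                                        # assumed AU occurrence base rate
    return occurrence * np.log(p) + (1 - occurrence) * np.log(1 - p)


def semi_supervised_objective(w, b, features, labels, log_prior, lam=0.1):
    """Hinge loss on labelled samples minus lam * mean log-likelihood of the
    predicted AU configuration under the label distribution, over all samples."""
    scores = features @ w + b                      # linear AU classifier (SVM-style score)
    labelled = ~np.isnan(labels)
    hinge = np.maximum(0.0, 1.0 - labels[labelled] * scores[labelled]).mean()
    prior_ll = log_prior(np.sign(scores)).mean()   # distribution constraint on all data
    return hinge - lam * prior_ll


# Minimal usage: evaluate the objective for one candidate classifier.
w, b = rng.normal(size=16), 0.0
print(semi_supervised_objective(w, b, features, labels, toy_log_prior))
```

In practice the objective would be minimized over the classifier parameters; the key design choice illustrated here is that the prior term touches every sample, labelled or not, which is what lets the unlabelled data shape the learned mapping.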