Regional Self-Attention Convolutional Neural Network for Facial Expression Recognition

2022 
Facial expression recognition (FER) has been a challenging task in the field of artificial intelligence. In this paper, we propose a novel model, named regional self-attention convolutional neural network (RSACNN), for FER. Unlike previous methods, RSACNN makes full use of the facial texture of expression-salient regions, and thus yields a robust feature representation for FER. The proposed model contains two novel parts: a regional local multiple pattern (RLMP) based on an improved K-means algorithm, and a regional self-attention module (RSAM). First, RLMP uses the improved K-means algorithm to dynamically cluster pixels, ensuring the robustness of texture features under salient expression variation. The texture description is further enhanced by extending the binary pattern to multiple patterns and integrating the gray-difference information between pixels in the region. Next, RSAM adaptively forms weights for each region through the self-attention mechanism, and uses a rank regularization loss (RRLoss) to constrain the weights of different regions. By jointly combining RLMP and RSAM, RSACNN effectively enhances the feature representation of expression-salient regions, thereby improving expression recognition performance. Extensive experiments on public datasets, i.e., CK
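The regional weighting described above can be illustrated with a minimal sketch. The code below is an assumption-laden toy (the paper's exact RSAM architecture and RRLoss formulation are not given in this abstract): it scores each region's feature vector, normalizes the scores into attention weights, pools a weighted global representation, and applies a hypothetical rank-regularization penalty that pushes the mean weight of the top-ranked regions above that of the rest by a margin.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class RegionalSelfAttention(nn.Module):
    """Toy sketch of RSAM-style regional weighting (not the paper's exact module):
    score each region's feature vector, normalize the scores, and pool a
    weighted global representation."""

    def __init__(self, feat_dim: int):
        super().__init__()
        self.score = nn.Linear(feat_dim, 1)  # per-region attention score

    def forward(self, region_feats):  # region_feats: (batch, regions, dim)
        weights = torch.sigmoid(self.score(region_feats)).squeeze(-1)  # (B, R)
        norm = weights / weights.sum(dim=1, keepdim=True)              # normalize over regions
        pooled = (norm.unsqueeze(-1) * region_feats).sum(dim=1)        # (B, D)
        return pooled, weights


def rank_regularization_loss(weights, margin=0.1, top_frac=0.5):
    """Hypothetical rank-regularization penalty: the mean weight of the
    top-ranked regions should exceed that of the remaining regions by a
    margin, constraining how attention spreads across regions."""
    r = weights.size(1)
    k = max(1, int(r * top_frac))
    sorted_w, _ = weights.sort(dim=1, descending=True)
    high = sorted_w[:, :k].mean(dim=1)   # mean of top-k region weights
    low = sorted_w[:, k:].mean(dim=1)    # mean of the rest
    return F.relu(margin + low - high).mean()
```

In this sketch the attention weights both pool the regional features and feed the rank loss, so training jointly learns which facial regions to emphasize, which is the role the abstract attributes to RSAM and RRLoss.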