Uniform Generic Representation for Single Sample Face Recognition

2020 
In this article, we propose a uniform generic representation (UGR) method for the single sample per person (SSPP) problem in face recognition, which enforces consistency between the global and local generic representations. For the local generic representation, we require each probe patch to be reconstructed by the corresponding patch of the same gallery image together with an intra-class variation dictionary; consequently, the coefficients of a probe image's patches over the patch gallery dictionaries should be similar to one another. For the global generic representation, the probe image's coefficient over the gallery dictionary should likewise be similar to those of its patches. To meet both requirements, we combine the local and global generic representations through a soft constraint and obtain the representation coefficients by solving a simple quadratic optimization problem. UGR has been evaluated on the Extended Yale B, AR, CMU-PIE, and LFW databases. Experimental results demonstrate the robustness and effectiveness of our method against variations in illumination, expression, occlusion, time, and pose.
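As a rough illustration of the optimization described in the abstract (not the authors' implementation), the sketch below assumes a global gallery dictionary G, a generic intra-class variation dictionary D, per-patch counterparts G_s and D_s, and a hypothetical coupling weight lam that softly pulls the patch-level gallery coefficients toward the global one. With an additional small ridge term eta, the objective is a strictly convex quadratic in all coefficients, solved here by block coordinate descent; all names and parameter choices are assumptions for illustration only.

```python
# Hypothetical sketch of a UGR-style coupled quadratic objective:
#   min  ||y - G a - D b||^2 + sum_s ||y_s - G_s a_s - D_s b_s||^2
#        + lam * sum_s ||a_s - a||^2 + eta * (sum of squared coefficients)
# This is an assumed formulation consistent with the abstract, not the paper's exact model.
import numpy as np

def ugr_sketch(y, y_patches, G, D, G_p, D_p, lam=0.1, eta=0.01, n_iter=50):
    """y: (d,) probe image; y_patches: list of (d_s,) probe patches.
    G, D: global dictionaries of shape (d, k) and (d, m);
    G_p, D_p: lists of per-patch dictionaries of shape (d_s, k) and (d_s, m).
    Returns the global gallery coefficient a of shape (k,)."""
    k, m = G.shape[1], D.shape[1]
    S = len(y_patches)
    a = np.zeros(k)                        # global gallery coefficient
    a_s = [np.zeros(k) for _ in range(S)]  # per-patch gallery coefficients
    for _ in range(n_iter):
        # Update each patch's coefficients, pulled softly toward the global a.
        for s in range(S):
            A = np.hstack([G_p[s], D_p[s]])            # (d_s, k+m)
            H = A.T @ A + eta * np.eye(k + m)
            H[:k, :k] += lam * np.eye(k)               # coupling acts on the gallery part only
            rhs = A.T @ y_patches[s]
            rhs[:k] += lam * a
            a_s[s] = np.linalg.solve(H, rhs)[:k]
        # Update the global coefficients, pulled softly toward the patch coefficients.
        A = np.hstack([G, D])
        H = A.T @ A + eta * np.eye(k + m)
        H[:k, :k] += lam * S * np.eye(k)
        rhs = A.T @ y
        rhs[:k] += lam * np.sum(a_s, axis=0)
        a = np.linalg.solve(H, rhs)[:k]
    return a

if __name__ == "__main__":
    # Toy shapes only: 4 patches, 5 gallery atoms, 8 variation atoms.
    rng = np.random.default_rng(0)
    d, ds, k, m, S = 64, 16, 5, 8, 4
    a = ugr_sketch(rng.normal(size=d), [rng.normal(size=ds) for _ in range(S)],
                   rng.normal(size=(d, k)), rng.normal(size=(d, m)),
                   [rng.normal(size=(ds, k)) for _ in range(S)],
                   [rng.normal(size=(ds, m)) for _ in range(S)])
    print(a.shape)  # (5,)
```

Because the local/global consistency enters only as a soft quadratic penalty rather than a hard equality, the whole problem remains a single convex quadratic, which is why a closed-form or simple iterative solver is sufficient, matching the abstract's claim of a simple quadratic optimization.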