Unsupervised face image retrieval using adjacent weighted component-based patches

2016 
Face Image Retrieval (FIR) remains a challenging problem in many real-world applications due to pose and illumination variations in face images. State-of-the-art systems attain good precision by utilizing the Bag-of-Visual-Words (BoVW) retrieval model, but their average precision (AP) declines rapidly when retrieving face images, primarily because they disregard face-specific features and generate low-discriminative visual words, mainly at the quantization level. In this paper, we employ facial patch-based features to preserve more discriminative features at the patch level in order to achieve higher precision. We take advantage of the TF-IDF voting scheme to assign higher weights to more discriminative facial features. First, features are extracted from facial components instead of the whole face, which preserves more informative and person-specific features. Then, an adjacent patch-based comparison is performed to preserve more discriminative features at the patch level while scoring candidate face images. Finally, a weighting approach is applied to further discriminate between features from different face components. Experimental results on 1,000 face images from LFW (Labeled Faces in the Wild) indicate the superiority of the proposed approach in terms of a higher mean average precision (mAP).
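The two core ingredients of the BoVW pipeline described above (TF-IDF weighted scoring of visual words, and evaluation by average precision) can be sketched as follows. This is a minimal illustration of the generic technique, not the paper's implementation: the function names, the toy visual-word vocabulary, and the simple dot-product scoring are assumptions for the sake of the example, and the component-wise and adjacent-patch weighting of the paper is not modeled here.

```python
import math
from collections import Counter

def tf_idf_scores(query_words, database):
    """Rank database images against a query using TF-IDF weighted
    visual-word matching (plain BoVW retrieval, no patch weighting).

    query_words: list of visual-word ids for the query face
    database: dict mapping image id -> list of visual-word ids
    Returns (image_id, score) pairs sorted by descending score.
    """
    n_images = len(database)
    # Document frequency: in how many images does each word appear?
    df = Counter()
    for words in database.values():
        df.update(set(words))
    # IDF gives rarer (more discriminative) visual words higher weight.
    idf = {w: math.log(n_images / df[w]) for w in df}

    q_tf = Counter(query_words)
    scores = {}
    for img, words in database.items():
        d_tf = Counter(words)
        # Dot product of the TF-IDF vectors over shared visual words.
        scores[img] = sum(
            q_tf[w] * d_tf[w] * idf.get(w, 0.0) ** 2
            for w in q_tf if w in d_tf
        )
    return sorted(scores.items(), key=lambda kv: -kv[1])

def average_precision(ranked_ids, relevant):
    """AP for one query: mean of precision@k over the relevant hits.
    mAP is the mean of this value across all queries."""
    hits, precisions = 0, []
    for k, img in enumerate(ranked_ids, start=1):
        if img in relevant:
            hits += 1
            precisions.append(hits / k)
    return sum(precisions) / len(relevant) if relevant else 0.0
```

For example, a query sharing the rare words `2` and `3` only with image `"a"` ranks `"a"` first, and `average_precision(["a", "b", "c"], {"a", "c"})` evaluates to (1/1 + 2/3) / 2.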