End-to-end Saliency-Guided Deep Image Retrieval

2020 
A challenging issue in content-based image retrieval (CBIR) is distinguishing the target object from cluttered backgrounds, which yields more discriminative image embeddings than when feature extraction is distracted by irrelevant objects. To address this issue, we propose a saliency-guided model built on deep image features. The model is based entirely on convolutional neural networks (CNNs) and incorporates a visual saliency detection module, making saliency detection a step that precedes feature extraction. The resulting saliency maps are used to refine the original inputs, and image features suitable for ranking are then extracted from the refined inputs. The model offers a practical scheme for incorporating saliency information into existing CNN-based CBIR systems with minimal impact on them. Some prior work assists image retrieval with other methods such as object detection or semantic segmentation, but these are not as fine-grained as saliency detection, and some require additional annotations for training. In contrast, we train the saliency module in a weakly supervised, end-to-end manner and do not need saliency ground truth. Extensive experiments on standard image retrieval benchmarks show that our model achieves competitive retrieval results.
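The abstract describes a pipeline in which a saliency module refines the input before a CNN backbone extracts ranking features. Below is a minimal PyTorch sketch of that idea, not the authors' implementation: the class name `SaliencyGuidedRetrieval`, the toy two-layer saliency head, and the ResNet-18 backbone are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models


class SaliencyGuidedRetrieval(nn.Module):
    """Hypothetical sketch: saliency detection precedes feature extraction."""

    def __init__(self):
        super().__init__()
        # Toy saliency module: predicts a single-channel map in [0, 1].
        # (Stands in for the paper's saliency detection module.)
        self.saliency = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
        )
        # CNN backbone as feature extractor, classification head removed.
        backbone = models.resnet18()
        self.backbone = nn.Sequential(*list(backbone.children())[:-1])

    def forward(self, x):
        s = self.saliency(x)          # (B, 1, H, W) saliency map
        refined = x * s               # refine input: suppress cluttered background
        f = self.backbone(refined)    # (B, 512, 1, 1) pooled features
        f = f.flatten(1)
        return F.normalize(f, dim=1)  # L2-normalized embedding for ranking


# Usage: rank a gallery against a query by cosine similarity of embeddings.
model = SaliencyGuidedRetrieval().eval()
with torch.no_grad():
    query = torch.randn(1, 3, 224, 224)
    gallery = torch.randn(8, 3, 224, 224)
    q, g = model(query), model(gallery)
    scores = q @ g.t()                # cosine similarity (unit-norm embeddings)
    ranking = scores.argsort(dim=1, descending=True)
```

Because the saliency head and the backbone form one differentiable graph, such a model could be trained end to end from a retrieval loss alone, consistent with the weakly supervised training described above.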