Self-attention driven adversarial similarity learning network

2020 
Abstract Similarity learning is a class of machine learning algorithms that aims to measure the relevance between given objects. However, conventional similarity learning algorithms usually measure the distance between entire objects in a latent feature space. Consequently, the obtained similarity scores only indicate how close the objects are as wholes, and cannot show which parts of them are similar to each other or how semantically similar they are. To address these problems, in this paper we propose a self-attention driven adversarial similarity learning network. First, discriminative self-attention weights are assigned to different regions of the given objects. The similarity learning step then measures the relevance between these self-attention weighted feature maps of the given objects under various topic vectors. The topic vectors are conditioned to capture and preserve hidden semantic information in the data distribution by a generator-discriminator model with an adversarial loss. This model generates objects from the topic vectors and propagates the difference between the generated and real objects back to the similarity learning step, which forces the topic vectors not only to assign discriminative similarity scores to different object pairs but also to further mine the hidden semantic information in the data distribution. The final similarity scores represent how tightly the given objects are connected to the topics, and regions with higher self-attention weights contribute more to these discriminative scores. The effectiveness of the proposed method is demonstrated on image retrieval and document retrieval tasks, where it is compared against various state-of-the-art algorithms in the field. Visualizations of the topic vectors and the self-attention weighted feature maps are presented to make the proposed method explainable.
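The scoring pipeline described in the abstract, region-level self-attention followed by topic-conditioned similarity, can be illustrated with a minimal NumPy sketch. All shapes, the attention query, and the topic vectors below are hypothetical placeholders; in the actual method these quantities are learned, and the topic vectors are additionally shaped by the adversarial generator-discriminator loss, which is omitted here.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax normalization.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)

# Hypothetical sizes: R regions per object, D-dim features, K topic vectors.
R, D, K = 4, 8, 3
regions = rng.standard_normal((R, D))  # region-level feature maps of one object
topics = rng.standard_normal((K, D))   # stand-in for learned topic vectors

# Self-attention: score each region against a (here random) learned query,
# then softmax-normalize so weights sum to one over regions.
query = rng.standard_normal(D)
attn = softmax(regions @ query)        # (R,) one weight per region
weighted = attn @ regions              # (D,) attention-weighted representation

# Similarity scores: how tightly the weighted object representation
# is connected to each topic vector.
scores = softmax(weighted @ topics.T)  # (K,) one score per topic
```

Regions with larger attention weights dominate `weighted` and therefore drive the topic similarity scores, which is the sense in which high-attention regions "make more contribution" to the final scores.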