Deep Adversarial Quantization Network for Cross-Modal Retrieval
2021
In this paper, we propose a seamless multimodal binary learning method for cross-modal retrieval. First, we use adversarial learning to learn modality-independent representations for the different modalities. Second, we formulate the loss function through a Bayesian approach that jointly maximizes the correlations of the modality-independent representations and learns common quantizer codebooks shared by both modalities. Based on these common codebooks, our method performs efficient and effective cross-modal retrieval with fast distance table lookup. Extensive experiments on three cross-modal datasets demonstrate that our method outperforms state-of-the-art methods. The source code is available at https://github.com/zhouyu1996/DAQN.
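The "fast distance table lookup" mentioned above follows the standard asymmetric pattern used with quantizer codebooks: database items are stored only as codeword indices, a query precomputes a small table of inner products against every codeword, and each item's similarity is then a few table lookups instead of a full float dot product. The sketch below is a generic product-quantization-style illustration of that idea, not the paper's exact formulation; all function names and the random codebooks are hypothetical.

```python
import numpy as np

def build_codebooks(dim, n_books, n_words, rng):
    # Hypothetical random codebooks: n_books sub-codebooks,
    # each with n_words codewords of length dim // n_books.
    # In DAQN these would be learned jointly for both modalities.
    return rng.standard_normal((n_books, n_words, dim // n_books))

def encode(x, codebooks):
    # Quantize: assign each sub-vector of x to its nearest codeword.
    n_books, n_words, sub = codebooks.shape
    codes = np.empty(n_books, dtype=np.int64)
    for b in range(n_books):
        sub_x = x[b * sub:(b + 1) * sub]
        codes[b] = np.argmin(((codebooks[b] - sub_x) ** 2).sum(axis=1))
    return codes

def query_distance_table(q, codebooks):
    # Precompute inner products between the query's sub-vectors
    # and every codeword: one (n_books, n_words) table per query.
    n_books, n_words, sub = codebooks.shape
    table = np.empty((n_books, n_words))
    for b in range(n_books):
        table[b] = codebooks[b] @ q[b * sub:(b + 1) * sub]
    return table

def lookup_similarity(codes, table):
    # Similarity to one database item = sum of n_books table lookups;
    # no floating-point decoding of the item is needed at query time.
    return table[np.arange(len(codes)), codes].sum()
```

By construction, the looked-up score equals the inner product between the query and the item's quantized reconstruction, which is why a single small table per query suffices for scanning the whole database.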