Incomplete Cross-modal Retrieval with Dual-Aligned Variational Autoencoders

2020 
Learning the relationships among multi-modal data, e.g., texts, images and videos, is a classic task in the multimedia community. Cross-modal retrieval (CMR) is a typical example, where the query and the retrieved results are in different modalities. Yet, the majority of existing works investigate CMR under the ideal assumption that the training samples in every modality are sufficient and complete. In real-world applications, however, this assumption does not always hold: mismatch is common in multi-modal datasets, and there is a high chance that samples in some modalities are missing or corrupted. As a result, incomplete CMR has become a challenging problem. In this paper, we propose Dual-Aligned Variational Autoencoders (DAVAE) to address the incomplete CMR problem. Specifically, we learn modality-invariant representations for the different modalities and use the learned representations for retrieval. We train multiple variational autoencoders, one for each modality, to learn the latent factors shared across modalities. These latent representations are further dual-aligned, at both the distribution level and the semantic level, to alleviate the modality gap and enhance the discriminability of the representations. For missing instances, we leverage generative models to synthesize their latent representations. Notably, we test our method under different ratios of random incompleteness. Extensive experiments on three datasets verify that our method consistently outperforms the state of the art.
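
To make the pipeline concrete, below is a minimal PyTorch sketch of the idea described in the abstract: one VAE per modality encoding into a shared latent space, a distribution-level alignment term, a semantic-level alignment term driven by a classifier shared across modalities, and a small generator that synthesizes latents for instances with a missing modality. All dimensions, module names (`ModalityVAE`, `shared_clf`, `img2txt`) and the moment-matching alignment loss are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT_DIM, NUM_CLASSES = 128, 10  # assumed sizes, not from the paper

class ModalityVAE(nn.Module):
    """One VAE per modality; all encoders map into a shared latent space."""
    def __init__(self, in_dim, hidden=512):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, LATENT_DIM)
        self.logvar = nn.Linear(hidden, LATENT_DIM)
        self.dec = nn.Sequential(nn.Linear(LATENT_DIM, hidden), nn.ReLU(),
                                 nn.Linear(hidden, in_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.dec(z), mu, logvar, z

def vae_loss(x, recon, mu, logvar):
    # Standard VAE objective: reconstruction + KL divergence to N(0, I).
    rec = F.mse_loss(recon, x)
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kld

def distribution_alignment(z_a, z_b):
    # Distribution-level alignment: match the moments of the two latent
    # batches (a simple stand-in for the paper's alignment criterion).
    return ((z_a.mean(0) - z_b.mean(0)).pow(2).sum()
            + (z_a.var(0) - z_b.var(0)).pow(2).sum())

# Semantic-level alignment: a classifier shared across modalities pushes
# latents of the same category together and keeps them discriminative.
shared_clf = nn.Linear(LATENT_DIM, NUM_CLASSES)

def semantic_alignment(z_a, z_b, labels):
    return (F.cross_entropy(shared_clf(z_a), labels)
            + F.cross_entropy(shared_clf(z_b), labels))

# For an instance whose text modality is missing, a lightweight generator
# (a hypothetical MLP here) synthesizes its text latent from the image latent.
img2txt = nn.Sequential(nn.Linear(LATENT_DIM, LATENT_DIM), nn.ReLU(),
                        nn.Linear(LATENT_DIM, LATENT_DIM))

# One training step on a toy batch (4096-d image and 300-d text features assumed).
img_vae, txt_vae = ModalityVAE(4096), ModalityVAE(300)
x_img, x_txt = torch.randn(32, 4096), torch.randn(32, 300)
labels = torch.randint(0, NUM_CLASSES, (32,))

recon_i, mu_i, lv_i, z_i = img_vae(x_img)
recon_t, mu_t, lv_t, z_t = txt_vae(x_txt)
loss = (vae_loss(x_img, recon_i, mu_i, lv_i)
        + vae_loss(x_txt, recon_t, mu_t, lv_t)
        + distribution_alignment(z_i, z_t)
        + semantic_alignment(z_i, z_t, labels))
z_txt_hat = img2txt(z_i)  # surrogate latent for a missing text instance
```

At retrieval time, queries and candidates from any modality would be encoded (or synthesized, when a modality is missing) into this shared latent space and ranked by a simple distance such as cosine similarity.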