Co-embedding: a semi-supervised multi-view representation learning approach
2021
Learning an expressive representation from multi-view data is a crucial step in many real-world applications. In this paper, we propose a semi-supervised multi-view representation learning approach named Co-Embedding. Conventional multi-view representation learning methods jointly concatenate the different views, which ignores information exchange between views and limits their ability to exploit the complementarity of multi-view data; in contrast, Co-Embedding enables mutual help between views by coordinating and exchanging information among them. Specifically, we first build a weighted, deep-metric-learning-based multi-view representation learning framework in Co-Embedding. This framework models multi-view information through coordinate alignment, which is designed to exploit complementary information from well-learned representations to help model the under-learned representations. Then, by exploiting the consensus property and neighborhood information, we design a multi-view label propagation algorithm that labels unlabeled data for Co-Embedding. Experimental results on seven benchmark multi-view datasets demonstrate the effectiveness of Co-Embedding.
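The multi-view label propagation step described above can be sketched as follows. This is an illustrative reconstruction under simple assumptions, not the authors' implementation: consensus is modeled by averaging per-view kNN Gaussian affinities into one graph, and neighborhood information drives an iterative spread of the seed labels; the function name, hyperparameters, and graph construction are all assumptions made for the sketch.

```python
import numpy as np

def multi_view_label_propagation(views, labels, alpha=0.9, k=5, n_iter=50):
    """Propagate labels over a consensus graph built from several views.

    views  : list of (n_samples, d_v) arrays, one per view
    labels : (n_samples,) int array; -1 marks unlabeled points
    (Hypothetical sketch; not the paper's exact algorithm.)
    """
    n = len(labels)
    classes = np.unique(labels[labels >= 0])

    # Consensus affinity: average of per-view kNN Gaussian similarities.
    W = np.zeros((n, n))
    for X in views:
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        sigma = np.median(d2) + 1e-12
        S = np.exp(-d2 / sigma)
        np.fill_diagonal(S, 0.0)
        # Keep only each point's k strongest neighbors (neighborhood info).
        drop = np.argsort(-S, axis=1)[:, k:]
        np.put_along_axis(S, drop, 0.0, axis=1)
        W += (S + S.T) / 2          # symmetrize, accumulate consensus
    W /= len(views)

    S = W / (W.sum(1, keepdims=True) + 1e-12)   # row-normalized transitions

    # One-hot seed matrix; unlabeled rows start at zero.
    Y = np.zeros((n, len(classes)))
    for j, c in enumerate(classes):
        Y[labels == c, j] = 1.0

    F = Y.copy()
    for _ in range(n_iter):
        F = alpha * S @ F + (1 - alpha) * Y     # spread, then re-clamp seeds
    return classes[F.argmax(1)]
```

With two views of the same well-separated clusters and one labeled seed per cluster, the propagated labels recover the cluster memberships of the remaining unlabeled points.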