Strict Subspace and Label-Space Structure for Domain Adaptation

2019 
One of the most important problems in transfer learning is domain adaptation, which aims to adapt a classifier or model trained on a source domain for use in a target domain, where the two domains may be different but related. Intuitively, a good feature representation across domains is crucial. In this paper, we put forward a novel feature representation approach for unsupervised domain adaptation, namely Strict Subspace and Label-space Structure for Domain Adaptation (SSLS). SSLS learns two feature representations that project the source domain and the target domain into two different subspaces in which both marginal and conditional distribution shift can be reduced effectively. Specifically, we use a Laplacian graph to keep corresponding points close in the projection subspaces as well as in the label space, which guarantees the strictness of the subspace structure and the quality of the pseudo labels. Extensive experiments verify that our method is superior to several state-of-the-art methods on three real-world cross-domain visual recognition tasks: Office+Caltech, USPS+MNIST, and PIE.
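To make the notion of "reducing marginal distribution shift" concrete, the following minimal sketch measures a linear-kernel Maximum Mean Discrepancy (MMD) between toy source and target features, then applies the simplest possible domain-wise affine map (per-domain mean centering) and measures it again. This is an illustrative assumption, not the SSLS algorithm itself: SSLS learns two full projection subspaces with Laplacian-graph and label-space constraints, whereas centering only removes the first-order (mean) discrepancy. The random feature matrices are hypothetical stand-ins for the image features used in the paper's benchmarks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical source/target features with a deliberate mean shift
# (stand-ins for real Office+Caltech / USPS+MNIST / PIE features).
Xs = rng.normal(0.0, 1.0, size=(100, 20))  # source domain
Xt = rng.normal(0.5, 1.2, size=(80, 20))   # target domain

def linear_mmd(A, B):
    # Empirical linear-kernel MMD: the distance between domain means.
    return float(np.linalg.norm(A.mean(axis=0) - B.mean(axis=0)))

mmd_before = linear_mmd(Xs, Xt)

# Simplest "projection": center each domain at its own mean. A learned
# subspace method goes much further, but even this drives the
# marginal-mean discrepancy to (numerically) zero.
Zs = Xs - Xs.mean(axis=0)
Zt = Xt - Xt.mean(axis=0)
mmd_after = linear_mmd(Zs, Zt)

print(f"MMD before: {mmd_before:.4f}, after: {mmd_after:.4f}")
```

In a full method, the conditional shift would be handled analogously, by matching class-conditional statistics using pseudo labels predicted on the target domain.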