Learning explicitly transferable representations for domain adaptation

2020 
Abstract

Domain adaptation tackles the problem where the training (source) domain and the test (target) domain have distinct data distributions, and thereby improves the generalization ability of deep models. A popular mechanism for domain adaptation is to learn a new feature representation that is supposed to be domain-invariant, so that classifiers trained on the source domain can be applied directly to the target domain. However, recent work reveals that learning new feature representations may deteriorate the adaptability of the original features and increase the expected error bound on the target domain. To address this, we propose to adapt classifiers rather than features. Specifically, we fill in the distribution gap between domains with additional transferable representations that are explicitly learned from the original features, while keeping the original features unchanged. In addition, we argue that transferable representations should be translatable from one domain to the other under appropriate mappings, and we introduce conditional entropy to mitigate semantic confusion during this mapping. Experiments on both standard and large-scale datasets verify that our method achieves new state-of-the-art results on unsupervised domain adaptation.
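To make the mechanism concrete, the following is a minimal PyTorch sketch of the idea as stated in the abstract: the original features are kept frozen, a small network learns an additional transferable representation on top of them, and a conditional-entropy term H(Y|X) = -E_x[ sum_y p(y|x) log p(y|x) ] on unlabeled target predictions penalizes semantically confused outputs. All module names, dimensions, and loss weights here are illustrative assumptions, and the mean-matching alignment term is a simple stand-in for the paper's learned cross-domain mappings, not the actual method.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TransferableAugment(nn.Module):
        """Augments frozen features with a learned transferable code (sketch)."""
        def __init__(self, feat_dim=256, aug_dim=64, num_classes=31):
            super().__init__()
            # Maps original (frozen) features to an additional transferable code.
            self.aug = nn.Sequential(nn.Linear(feat_dim, aug_dim), nn.ReLU(),
                                     nn.Linear(aug_dim, aug_dim))
            # Classifier sees original features concatenated with the code.
            self.clf = nn.Linear(feat_dim + aug_dim, num_classes)

        def forward(self, f):
            # detach() blocks gradients into the feature extractor, keeping the
            # original features unchanged, as the abstract requires.
            f = f.detach()
            z = self.aug(f)
            return self.clf(torch.cat([f, z], dim=1)), z

    def conditional_entropy(logits):
        # Estimate of H(Y|X) on unlabeled target data; minimizing it pushes
        # predictions away from class boundaries (less semantic confusion).
        p = F.softmax(logits, dim=1)
        return -(p * torch.log(p + 1e-8)).sum(dim=1).mean()

    # One training step on toy tensors (shapes and weights are assumptions).
    model = TransferableAugment()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)

    f_src = torch.randn(8, 256)          # pre-extracted, frozen source features
    y_src = torch.randint(0, 31, (8,))   # source labels
    f_tgt = torch.randn(8, 256)          # frozen, unlabeled target features

    logits_s, z_s = model(f_src)
    logits_t, z_t = model(f_tgt)
    # Fill the distribution gap: align the learned codes across domains.
    # Mean matching is a simple stand-in for the paper's learned mappings.
    align = (z_s.mean(0) - z_t.mean(0)).pow(2).sum()
    loss = (F.cross_entropy(logits_s, y_src)
            + 0.1 * align
            + 0.1 * conditional_entropy(logits_t))
    opt.zero_grad(); loss.backward(); opt.step()

Only the augmentation network and the classifier receive gradients here; the frozen features stand in for the "original features" whose adaptability the paper aims to preserve.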