Domain-aware Stacked AutoEncoders for Zero-shot Learning

2020 
Abstract Zero-shot learning (ZSL), which focuses on transferring knowledge from the seen (source) classes to unseen (target) ones, has attracted increasing attention in the computer vision community. However, there is often a large domain gap between the source and target classes, resulting in the projection domain shift problem. To address this, we propose a novel model, named Domain-aware Stacked AutoEncoders (DaSAE), which consists of two interactive stacked auto-encoders that learn domain-aware projections for adapting the source and target domains respectively. In each auto-encoder, the first-layer encoder projects a visual feature vector into the semantic space, and the second-layer encoder connects the semantic description of a sample directly with its label. Meanwhile, the two-layer decoders reconstruct the visual representation from the label information and the semantic description successively. Moreover, a manifold regularization that exploits the manifold structure residing in the target data is integrated into the basic DaSAE, which further improves the generalization ability of our model. Extensive experiments on benchmark datasets clearly demonstrate that our DaSAE outperforms state-of-the-art alternatives by significant margins.
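The two-layer encode/decode structure described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the dimensions, linear mappings, and tied (transposed) decoder weights are assumptions for the sketch, and the actual training objective and domain-adaptation terms are not shown.

```python
import numpy as np

# Minimal sketch of one stacked auto-encoder branch of DaSAE.
# Assumed: linear projections with tied decoder weights; illustrative dims.
rng = np.random.default_rng(0)

d_v, d_s, d_c = 2048, 85, 50   # visual, semantic, and label dims (illustrative)

W1 = rng.standard_normal((d_s, d_v)) * 0.01  # encoder 1: visual -> semantic
W2 = rng.standard_normal((d_c, d_s)) * 0.01  # encoder 2: semantic -> label

def encode(x):
    """Project a visual feature into semantic space, then into label space."""
    s = W1 @ x          # first-layer encoding: semantic description
    y = W2 @ s          # second-layer encoding: label-space representation
    return s, y

def decode(y):
    """Reconstruct semantic description, then visual feature (tied weights)."""
    s_hat = W1.T @ (W2.T @ y)  # unused intermediate kept implicit below
    return s_hat

def decode_steps(y):
    s_hat = W2.T @ y      # second-layer decoder: label -> semantic
    x_hat = W1.T @ s_hat  # first-layer decoder: semantic -> visual
    return s_hat, x_hat

x = rng.standard_normal(d_v)     # one visual feature vector
s, y = encode(x)
s_hat, x_hat = decode_steps(y)
print(s.shape, y.shape, x_hat.shape)  # (85,) (50,) (2048,)
```

Training would then minimize the reconstruction errors at both layers (plus, per the abstract, a manifold regularizer on the target data), with one such branch per domain.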