A Comprehensive and Adversarial Approach to Self-Supervised Representation Learning

2020 
Self-supervised representation learning, also known as unsupervised embedding learning, aims to produce effective representations of data instances without manual labels; obtaining such representations remains a critical challenge underlying many semi-supervised and supervised learning tasks. This paper proposes a new self-supervised learning approach, called Super-AND, which extends the memory-based pretraining method AND [13]. Super-AND introduces a set of losses that incorporates data augmentation into neighborhood discovery for more accurate anchor selection during embedding learning, and further applies adversarial training to learn more confident embeddings in the unsupervised setting. Experimental results show that Super-AND outperforms existing state-of-the-art self-supervised representation learning approaches, achieving 89.2% accuracy on CIFAR-10 image classification.
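To make the abstract's two main ingredients concrete, the sketch below is a minimal, hypothetical PyTorch illustration of (a) instance-level discrimination against a memory bank of per-instance features, in the spirit of the AND-style neighborhood discovery the paper builds on, and (b) an augmentation-invariance term between two views of the same image. All names, hyper-parameters, and the exact form of the losses are assumptions for illustration only; the paper's actual objective, including its adversarial component and anchor-selection procedure, is not reproduced here.

```python
# Hypothetical sketch, not the paper's implementation.
import torch
import torch.nn.functional as F


class MemoryBank:
    """Stores one L2-normalized feature vector per training instance."""

    def __init__(self, num_instances: int, embed_dim: int, momentum: float = 0.5):
        self.features = F.normalize(torch.randn(num_instances, embed_dim), dim=1)
        self.momentum = momentum

    @torch.no_grad()
    def update(self, indices: torch.Tensor, new_feats: torch.Tensor) -> None:
        # Exponential moving average of each instance's stored feature.
        mixed = self.momentum * self.features[indices] + (1 - self.momentum) * new_feats
        self.features[indices] = F.normalize(mixed, dim=1)


def embedding_loss(feat, feat_aug, indices, bank, temperature: float = 0.1):
    """Illustrative combination of two terms:
    (a) instance discrimination against the memory bank (each instance is its
        own class, with its own bank slot as the target), and
    (b) augmentation invariance between two views of the same image.
    `indices` is a LongTensor of dataset indices for the current batch."""
    feat = F.normalize(feat, dim=1)
    feat_aug = F.normalize(feat_aug, dim=1)

    # (a) Similarity of each embedding to every stored instance feature.
    logits = feat @ bank.features.t() / temperature  # shape (B, N)
    instance_loss = F.cross_entropy(logits, indices)

    # (b) Pull the augmented view's embedding toward the original view's.
    invariance_loss = (1 - (feat * feat_aug).sum(dim=1)).mean()

    return instance_loss + invariance_loss
```

A typical training step under these assumptions would compute `loss = embedding_loss(encoder(x), encoder(augment(x)), idx, bank)`, call `loss.backward()` and the optimizer step, and only then refresh the bank with `bank.update(idx, F.normalize(encoder(x).detach(), dim=1))`, so that the in-place bank update does not interfere with gradient computation.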