Compare Network with Task-dependent Embeddings for Few-shot Learning

2021 
Few-shot learning, a recent research hotspot in computer vision, attempts to distinguish unseen classes from only a few labeled samples. Although significant progress has been made in this field, improving classification accuracy remains difficult. To address this issue, we propose an effective and novel approach, the compare network with task-dependent embeddings for few-shot learning, which consists of three modules: an embedding module, a self-attention module, and a compare module. Given the features extracted by the embedding module, the self-attention module, which uses attention mechanisms as a component, captures the internal relevance of features within a given task and thus learns more discriminative and adaptive task-dependent embeddings by viewing the task as a whole. The compare module, which uses attention mechanisms directly as a metric, then efficiently evaluates the similarity score between a query and the discriminative task-dependent embeddings. The effectiveness of our architecture is verified on two standard classification benchmarks, miniImageNet and Omniglot. Experimental results show that our method achieves very competitive results compared with state-of-the-art methods on few-shot tasks and realizes consistent improvements over the baselines.
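The abstract describes a three-stage pipeline: an embedding backbone, a self-attention step that conditions each sample's embedding on the whole episode, and an attention-style compare module that scores queries against the support set. The sketch below is only an illustration of that pipeline under assumed design choices (a Conv-4 backbone, a single-head multi-head attention layer, and prototype-style class averaging); module names, dimensions, and the exact similarity function are assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn


class ConvEmbedding(nn.Module):
    """Embedding module: a small conv backbone producing per-image features."""
    def __init__(self, out_dim=64):
        super().__init__()
        def block(c_in, c_out):
            return nn.Sequential(
                nn.Conv2d(c_in, c_out, 3, padding=1),
                nn.BatchNorm2d(c_out), nn.ReLU(), nn.MaxPool2d(2))
        self.net = nn.Sequential(block(3, 64), block(64, 64),
                                 block(64, 64), block(64, out_dim),
                                 nn.AdaptiveAvgPool2d(1))

    def forward(self, x):                       # x: (N, 3, H, W)
        return self.net(x).flatten(1)           # (N, out_dim)


class TaskSelfAttention(nn.Module):
    """Self-attention module: re-embeds every sample conditioned on the
    whole task so the embeddings become task-dependent."""
    def __init__(self, dim, heads=1):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, feats):                   # feats: (N, dim)
        x = feats.unsqueeze(0)                  # treat the task as one sequence
        out, _ = self.attn(x, x, x)
        return (x + out).squeeze(0)             # residual connection


class CompareModule(nn.Module):
    """Compare module: attention-style similarity between each query embedding
    and the class-averaged, task-dependent support embeddings."""
    def forward(self, query, support, support_labels, n_way):
        protos = torch.stack([support[support_labels == c].mean(0)
                              for c in range(n_way)])          # (n_way, dim)
        # scaled dot-product scores as classification logits
        return query @ protos.t() / protos.size(-1) ** 0.5      # (n_query, n_way)


# Hypothetical 5-way 1-shot episode: embed all images, refine with
# self-attention over the episode, then score queries against prototypes.
embed, attn, compare = ConvEmbedding(), TaskSelfAttention(64), CompareModule()
support_x, query_x = torch.randn(5, 3, 84, 84), torch.randn(15, 3, 84, 84)
support_y = torch.arange(5)
feats = attn(embed(torch.cat([support_x, query_x])))
logits = compare(feats[5:], feats[:5], support_y, n_way=5)       # (15, 5)
```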