Few-Shot Text Classification with External Knowledge Expansion

2021 
The performance of most current text classification models drops dramatically when annotated data is scarce. In such challenging scenarios, existing few-shot text classification models are not sufficiently accurate or robust because they capture limited semantic knowledge. In this paper, we propose a few-shot text classification method based on external knowledge expansion, together with two expansion strategies that incorporate richer information during training and prediction by leveraging WordNet and the pre-trained model BERT. We split texts into sentences, develop techniques to select terms and semantically expand sentences based on external knowledge, and compute the text instance representation after knowledge expansion. In this way, the method improves performance on the task of few-shot text classification. We evaluate our method on two English text classification datasets, IMDB and ASRS, across a range of training set sizes. Experimental results show that, through knowledge expansion, our method is robust and yields better or comparable performance to state-of-the-art methods on both datasets, achieving a 2.7% relative improvement over the previous method on the ASRS test set with a training set size of 380.
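To make the pipeline concrete, below is a minimal sketch of knowledge-based expansion followed by BERT encoding: a text is split into sentences, selected terms are expanded with WordNet synonyms, and the expanded sentences are encoded with BERT to obtain a text-instance representation. The term-selection heuristic (first-synset lemmas of each word) and the averaged [CLS] representation are simplifying assumptions for illustration, not the paper's exact expansion strategies.

```python
# Illustrative sketch only: simplified term selection and representation,
# not the exact method described in the paper.
import torch
import nltk
from nltk.corpus import wordnet as wn
from transformers import BertTokenizer, BertModel

nltk.download("wordnet", quiet=True)
nltk.download("punkt", quiet=True)

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")
bert.eval()


def expand_sentence(sentence: str, max_synonyms: int = 2) -> str:
    """Append WordNet synonyms of each word to the sentence (assumed heuristic)."""
    expansions = []
    for word in sentence.split():
        synsets = wn.synsets(word.lower())
        if not synsets:
            continue
        # Use lemma names from the first synset as cheap synonym candidates.
        lemmas = [l.replace("_", " ") for l in synsets[0].lemma_names()
                  if l.lower() != word.lower()]
        expansions.extend(lemmas[:max_synonyms])
    return sentence + " " + " ".join(expansions) if expansions else sentence


def text_representation(text: str) -> torch.Tensor:
    """Split into sentences, expand each, and average the BERT [CLS] vectors."""
    vectors = []
    for sent in nltk.sent_tokenize(text):
        inputs = tokenizer(expand_sentence(sent), return_tensors="pt",
                           truncation=True, max_length=128)
        with torch.no_grad():
            out = bert(**inputs)
        vectors.append(out.last_hidden_state[:, 0, :])  # [CLS] embedding
    return torch.cat(vectors, dim=0).mean(dim=0)


# A few-shot classifier head would then be trained on representations like this.
vec = text_representation("The pilot reported an unexpected altitude deviation.")
print(vec.shape)  # torch.Size([768])
```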