NoiER: An Approach for Training More Reliable Fine-Tuned Downstream Task Models

2022 
Recent developments in pretrained language models trained in a self-supervised fashion, such as BERT, are driving rapid progress in natural language processing. However, their impressive performance rests on leveraging syntactic artefacts of the training data rather than fully understanding the intrinsic meaning of language. Excessive exploitation of these spurious artefacts leads to a problematic issue, the distribution collapse problem: a model fine-tuned on a downstream task is unable to distinguish out-of-distribution sentences, yet still assigns them high-confidence scores. In this paper, we argue that distribution collapse is a prevalent issue in pretrained language models and propose noise entropy regularisation (NoiER), an efficient learning paradigm that solves the problem without auxiliary models or additional data. The proposed approach improves traditional out-of-distribution detection evaluation metrics by 55% on average compared to the original fine-tuned models.
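As a rough illustration of the idea summarised in the abstract, the PyTorch-style sketch below augments the standard fine-tuning loss with an entropy term computed on synthesized noise inputs, so the model is discouraged from assigning confident scores to out-of-distribution sentences. The noise construction (uniformly random token ids), the `lambda_noise` weight, and the assumption that `model` returns HuggingFace-style `.logits` are illustrative choices, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def noise_entropy_regularised_loss(model, input_ids, attention_mask, labels,
                                   vocab_size, lambda_noise=1.0):
    """Cross-entropy on real data plus an entropy term on noise inputs.

    Sketch only: the noise scheme and weighting are assumptions made
    for illustration, not the exact recipe from the paper.
    """
    # Standard fine-tuning loss on in-distribution examples.
    logits = model(input_ids=input_ids, attention_mask=attention_mask).logits
    task_loss = F.cross_entropy(logits, labels)

    # Synthesize noise inputs of the same shape; no auxiliary model or
    # additional data is required.
    noise_ids = torch.randint_like(input_ids, low=0, high=vocab_size)
    noise_logits = model(input_ids=noise_ids,
                         attention_mask=attention_mask).logits

    # Encourage near-uniform (high-entropy) predictions on noise, so that
    # out-of-distribution inputs stop receiving high-confidence scores.
    noise_log_probs = F.log_softmax(noise_logits, dim=-1)
    entropy = -(noise_log_probs.exp() * noise_log_probs).sum(dim=-1).mean()

    # Maximising entropy on noise = subtracting it from the loss.
    return task_loss - lambda_noise * entropy
```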