When BERT Meets Bilbo: A Learning Curve Analysis of Pretrained Language Model on Disease Classification
2020
Natural language processing tasks in the health domain often deal with limited amounts of labeled data. Pre-trained language models, such as Bidirectional Encoder Representations from Transformers (BERT), offer a promising way to compensate for the lack of training data. However, previous downstream tasks often used training data at a scale that is unlikely to be available in the health domain. In this work, we conducted a learning curve analysis on a disease classification task to study whether BERT and baseline models can still benefit downstream tasks when training data are relatively small in the context of health NLP.
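The abstract does not include code, but the learning-curve setup it describes (fine-tuning BERT on progressively larger slices of the training set and measuring downstream performance at each size) can be sketched as below. This is a minimal illustration, not the authors' implementation: the model name, hyperparameters, fraction grid, and the `load_notes()` loader are all placeholder assumptions.

```python
# Minimal learning-curve sketch (illustrative, not the paper's code):
# fine-tune a BERT classifier on increasing fractions of the training
# set and record test accuracy at each training-set size.
import numpy as np
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

MODEL = "bert-base-uncased"  # assumed checkpoint; the paper evaluates BERT variants
tokenizer = AutoTokenizer.from_pretrained(MODEL)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=128)

def run_point(train_ds, test_ds, num_labels):
    """Fine-tune from scratch at one training-set size; return test accuracy."""
    model = AutoModelForSequenceClassification.from_pretrained(
        MODEL, num_labels=num_labels)
    args = TrainingArguments(output_dir="out", num_train_epochs=3,
                             per_device_train_batch_size=16,
                             logging_steps=50, report_to="none")
    trainer = Trainer(model=model, args=args, train_dataset=train_ds)
    trainer.train()
    preds = trainer.predict(test_ds).predictions.argmax(-1)
    return (preds == np.array(test_ds["label"])).mean()

# Hypothetical data: clinical notes and integer disease labels.
texts, labels = load_notes()  # placeholder loader, not a real API
full = Dataset.from_dict({"text": texts, "label": labels})
full = full.map(tokenize, batched=True)
split = full.train_test_split(test_size=0.2, seed=42)
train, test = split["train"], split["test"]

# Learning curve: one fine-tuning run per training-set fraction.
for frac in (0.01, 0.05, 0.1, 0.25, 0.5, 1.0):
    n = max(1, int(frac * len(train)))
    subset = train.shuffle(seed=42).select(range(n))
    acc = run_point(subset, test, num_labels=len(set(labels)))
    print(f"train size {n:5d}: accuracy {acc:.3f}")
```

The same loop can be run with a simpler baseline (e.g., TF-IDF plus logistic regression) in place of `run_point` to compare how quickly each model family improves as labeled data grows, which is the comparison the abstract describes.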