A Method of Relation Extraction Using Pre-training Models

2020 
Relation Extraction (RE), an essential task in Natural Language Processing (NLP), aims to extract the potential relation between two entities in a sentence. It is a crucial step in extracting information from unstructured data and building a Knowledge Graph (KG). The performance of deep learning methods for RE, such as Recurrent Neural Networks (RNNs) and Convolutional Neural Networks (CNNs), depends heavily on the quality and scale of the training set. Recently, pre-trained models such as BERT and ERNIE have achieved State-Of-The-Art (SOTA) results on many NLP tasks because they acquire prior semantic knowledge during pre-training. It is therefore natural to ask whether this prior knowledge can also improve the performance of RE. In this paper, we propose a method of RE using two kinds of pre-trained models: BERT and ERNIE. First, unique symbols are inserted around the two entities in the input sequence. RE is then treated as a text classification task, and the prior semantic knowledge captured by the pre-trained models is used to improve performance. Experiments are carried out on the SemEval 2010 Task 8 dataset. The results demonstrate that the proposed method improves the performance of RE compared with previous approaches.
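As a concrete illustration of the setup the abstract describes, the sketch below shows how unique marker symbols might be inserted around the two entities and the marked sentence classified with a pre-trained BERT model. This is a minimal sketch assuming the Hugging Face transformers library; the marker tokens ([E1], [/E1], [E2], [/E2]), the example sentence, and the base checkpoint are illustrative assumptions, not the paper's exact configuration.

import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
# Register the entity markers as special tokens so WordPiece does not split them.
# The specific marker strings here are hypothetical, not taken from the paper.
tokenizer.add_special_tokens(
    {"additional_special_tokens": ["[E1]", "[/E1]", "[E2]", "[/E2]"]}
)

# SemEval 2010 Task 8 has 9 directed relation types plus "Other": 19 classes.
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=19
)
# Grow the embedding matrix so each new marker token gets a trainable vector.
model.resize_token_embeddings(len(tokenizer))

# Wrap the two entities with the unique symbols, then classify the sentence.
sentence = "The [E1] company [/E1] fabricated plastic [E2] chairs [/E2] ."
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_relation = logits.argmax(dim=-1).item()

In practice the model would be fine-tuned on the SemEval 2010 Task 8 training set before prediction; registering the markers as special tokens and resizing the embeddings are the steps that let the classifier learn entity-position information from the inserted symbols.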