NILC at SR’20: Exploring Pre-Trained Models in Surface Realisation
2020
This paper describes the submission by the NILC Computational Linguistics research group of the University of São Paulo, Brazil, to the English Track 2 (closed sub-track) at the Surface Realisation Shared Task 2020. The success of current pre-trained models such as BERT and GPT-2 on many tasks is well known; however, this is not yet the case for data-to-text generation, and only recently have some initiatives focused on it. Accordingly, we explore how a pre-trained model (GPT-2) performs on the UD-to-text generation task. Overall, the results achieved were poor, but they point to some interesting directions. Among the lessons learned, we note the need to study strategies for representing UD inputs and for introducing structural knowledge into these pre-trained models.
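The abstract does not specify the input representation used. As a rough illustration of the kind of UD-to-text pipeline it describes, the sketch below conditions GPT-2 on a linearised dependency tree via the Hugging Face transformers library; the linearisation format shown is a hypothetical assumption, not the paper's actual encoding.

```python
# Minimal sketch of UD-to-text generation with GPT-2 (transformers library).
# The linearised UD input below is a hypothetical illustration; the paper's
# actual input representation and fine-tuning setup may differ.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Hypothetical linearisation: lemmas paired with their dependency relations,
# with "=" as an assumed separator before the target surface sentence.
ud_input = "John nsubj see root Mary obj . punct ="
inputs = tokenizer(ud_input, return_tensors="pt")

# Greedily generate a surface realisation conditioned on the linearised tree.
outputs = model.generate(
    **inputs,
    max_new_tokens=20,
    do_sample=False,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

In practice, an out-of-the-box GPT-2 would first be fine-tuned on pairs of linearised UD trees and their reference sentences before such prompting could yield usable realisations.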