Improving Back-Translation with Iterative Filtering and Data Selection for Sinhala-English NMT

2021 
Neural Machine Translation (NMT) requires a large amount of parallel data to achieve reasonable results. In low-resource settings such as Sinhala-English, where parallel data is scarce, NMT tends to give sub-optimal results, and the problem is aggravated when the translation is domain-specific. One solution to this data scarcity problem is data augmentation: widely available large monolingual corpora can be used to augment the parallel data of low-resource language pairs. A popular data augmentation technique is Back-Translation (BT). Over the years, many techniques have been proposed to improve vanilla BT, most prominently iterative BT, filtering, and data selection. We employ these techniques in extremely low-resource, domain-specific Sinhala-English translation to improve the performance of NMT. In particular, we build on previous research and show that combining these techniques yields an even better result. Our combined model provided a +3.0 BLEU gain over the vanilla NMT model and a +1.93 BLEU gain over the vanilla BT model for Sinhala → English translation. Furthermore, a +0.65 BLEU gain over the vanilla NMT model and a +2.22 BLEU gain over the vanilla BT model were observed for English → Sinhala translation.
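To make the combined pipeline concrete, below is a minimal Python sketch of iterative back-translation with a filtering and a data-selection step. This is an illustration under stated assumptions, not the paper's implementation: the `train` callable, the length-ratio filter, and the `domain_score` function are hypothetical placeholders (real systems typically score domain relevance with an in-domain language model, e.g. cross-entropy difference).

```python
# Sketch of iterative back-translation with filtering and data selection.
# All model-facing pieces (train, domain_score) are hypothetical stand-ins.

from typing import Callable, List, Tuple

ParallelData = List[Tuple[str, str]]  # (source, target) sentence pairs


def domain_score(sentence: str) -> float:
    # Hypothetical domain-similarity score; a real system would use an
    # in-domain language model rather than this length-based proxy.
    return -abs(len(sentence.split()) - 20)


def iterative_back_translate(
    train: Callable[[ParallelData], Callable[[str], str]],
    parallel: ParallelData,
    monolingual_tgt: List[str],
    max_len_ratio: float = 1.5,
    select_top_k: int = 10_000,
    rounds: int = 2,
) -> ParallelData:
    """Each round: train a target->source model on the current data,
    back-translate target-side monolingual text into synthetic sources,
    filter noisy pairs, select the most in-domain ones, and augment."""
    data = list(parallel)
    for _ in range(rounds):
        # Train the reverse (target -> source) model on current data.
        reverse_model = train([(t, s) for s, t in data])

        # Back-translate monolingual target sentences.
        synthetic = [(reverse_model(t), t) for t in monolingual_tgt]

        # Filtering: drop pairs whose length ratio suggests a bad translation.
        filtered = []
        for s, t in synthetic:
            if not s:
                continue
            ratio = len(s.split()) / max(len(t.split()), 1)
            if 1 / max_len_ratio <= ratio <= max_len_ratio:
                filtered.append((s, t))

        # Data selection: keep the k pairs judged closest to the target domain.
        filtered.sort(key=lambda pair: domain_score(pair[1]), reverse=True)
        data = list(parallel) + filtered[:select_top_k]
    return data
```

In this sketch, each round re-trains the reverse model on the augmented corpus, so later rounds back-translate with a stronger model; the filter and selection steps aim to keep the synthetic pairs from drowning the genuine parallel data in noise.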