Mono and Multi-Lingual Machine Translation Using Deep Attention-based Models

2021 
Machine translation (MT) is the automatic translation of natural language text. The complexities and incompatibilities of natural languages make MT an arduous task that faces several challenges, especially when its output is compared to human translation. With the advent of deep learning, Neural Machine Translation (NMT) has brought MT results closer to human expectations. The latest deep-learning approaches are based on Recurrent Neural Networks (RNNs), Transformers, and convolutions, typically arranged in encoder/decoder pairs. In this work, we propose a new attention-based encoder-decoder model with monolingual and multilingual training for MT. We train several models on single languages and one model on several languages, using both our long short-term memory (LSTM) architecture and a Transformer. We show that the Transformer outperforms the LSTM on our specific neural machine translation task. The models are evaluated on the IWSLT2016 datasets, which contain training data for three languages and the test2015 and test2016 sets for testing. The experiments show 93.9% accuracy, which we estimate as a 5-point improvement over previous studies in BLEU, the standard MT evaluation metric.
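The attention mechanism at the core of the encoder-decoder models described above lets each decoder step compute a weighted combination of the encoder states. The abstract does not give the paper's exact formulation, so the following is only a minimal illustrative sketch of standard scaled dot-product attention for a single decoder query, in plain Python (vectors as lists of floats; function names are hypothetical):

```python
import math

def softmax(xs):
    # numerically stable softmax over a list of scores
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def attention(query, keys, values):
    """Scaled dot-product attention for one decoder query over
    encoder key/value vectors; returns the context vector."""
    d = len(query)
    scores = [dot(query, k) / math.sqrt(d) for k in keys]
    weights = softmax(scores)
    # context vector: attention-weighted sum of the value vectors
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]
```

For example, a query closely aligned with the first key pulls the context toward the first value: `attention([10.0, 0.0], [[1.0, 0.0], [0.0, 1.0]], [[1.0, 0.0], [0.0, 1.0]])` returns a vector close to `[1.0, 0.0]`.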