LSTM vs. GRU for Arabic Machine Translation

2021 
A Machine Translation (MT) approach that works for European languages may not work equally well for Arabic, because of the structure of the language. MT based on neural networks has recently become an alternative to statistical MT. This paper presents a case study of how different sequence-to-sequence Deep Learning (DL) models perform on the task of Arabic MT. It offers a comprehensive comparison of four recurrent architectures: Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), Bidirectional LSTM (BiLSTM), and Bidirectional GRU (BiGRU). Specifically, each Arabic input sequence is translated into an English one using an encoder-decoder model built on each of the four architectures with an attention mechanism. The paper also studies the impact of different preprocessing techniques on Arabic MT.
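The abstract does not include the authors' code, but the architecture it compares is a standard attentional encoder-decoder. The following is a minimal PyTorch sketch of one of the four variants (a BiGRU encoder with an additive attention decoder); all class names, vocabulary sizes, and dimensions are illustrative assumptions, not the paper's implementation. Swapping nn.GRU for nn.LSTM (and handling the extra cell state) would give the LSTM/BiLSTM variants.

```python
# Minimal sketch (not the authors' code) of a GRU-based encoder-decoder
# with additive attention. All sizes and names below are assumptions.
import torch
import torch.nn as nn


class Encoder(nn.Module):
    def __init__(self, vocab_size, emb_dim, hid_dim):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        # bidirectional=True gives the BiGRU variant; False gives plain GRU
        self.rnn = nn.GRU(emb_dim, hid_dim, batch_first=True, bidirectional=True)

    def forward(self, src):                       # src: (batch, src_len)
        outputs, hidden = self.rnn(self.embedding(src))
        # outputs: (batch, src_len, 2*hid_dim); sum the two directions'
        # final states so the unidirectional decoder can consume them
        hidden = (hidden[0] + hidden[1]).unsqueeze(0)
        return outputs, hidden


class AttnDecoder(nn.Module):
    def __init__(self, vocab_size, emb_dim, hid_dim):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.attn = nn.Linear(hid_dim + 2 * hid_dim, 1)   # additive score
        self.rnn = nn.GRU(emb_dim + 2 * hid_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, tgt_tok, hidden, enc_outputs):
        # tgt_tok: (batch, 1) previous target token
        query = hidden[-1].unsqueeze(1).expand(-1, enc_outputs.size(1), -1)
        scores = self.attn(torch.cat([query, enc_outputs], dim=2))
        weights = torch.softmax(scores, dim=1)            # over source positions
        context = (weights * enc_outputs).sum(dim=1, keepdim=True)
        rnn_in = torch.cat([self.embedding(tgt_tok), context], dim=2)
        output, hidden = self.rnn(rnn_in, hidden)
        return self.out(output.squeeze(1)), hidden


# Toy forward pass with made-up sizes (Arabic token ids -> English logits).
enc = Encoder(vocab_size=8000, emb_dim=128, hid_dim=256)
dec = AttnDecoder(vocab_size=6000, emb_dim=128, hid_dim=256)
src = torch.randint(0, 8000, (4, 12))   # batch of 4 Arabic sentences
enc_out, hidden = enc(src)
logits, hidden = dec(torch.zeros(4, 1, dtype=torch.long), hidden, enc_out)
print(logits.shape)                      # torch.Size([4, 6000])
```

At inference time the decoder would be called one step at a time, feeding back its own predictions until an end-of-sentence token is produced.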