2D Self-attention Convolutional Recurrent Network for Offline Handwritten Text Recognition
2021
Offline handwritten text recognition remains a challenging problem due to varied backgrounds, noise, diverse writing styles, and touching characters. In this paper, we propose a 2D Self-Attention Convolutional Recurrent Network (2D-SACRN) for recognizing handwritten text lines. The 2D-SACRN model consists of three main components: 1) a 2D self-attention based convolutional feature extractor that extracts a feature sequence from an input image; 2) a recurrent encoder that encodes the feature sequence into a sequence of label probabilities; and 3) a CTC decoder that decodes the sequence of label probabilities into the final label sequence. In this model, we present a 2D self-attention mechanism in the feature extractor to capture relationships between widely separated spatial regions of an input image. In the experiments, we evaluate the proposed model on three datasets: IAM Handwriting, Rimes, and TUAT Kondate. The results show that the proposed model achieves accuracy similar to or better than state-of-the-art models on all datasets.
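To make the three-component pipeline concrete, the following is a minimal PyTorch sketch of a CNN feature extractor with a 2D self-attention block, a bidirectional LSTM encoder, and a linear projection to per-frame label probabilities for CTC decoding. The layer sizes, number of layers, the query/key reduction factor, and the exact attention formulation are assumptions for illustration and are not taken from the paper.

```python
import torch
import torch.nn as nn


class SelfAttention2D(nn.Module):
    """Dot-product self-attention over all spatial positions of a feature map,
    letting widely separated regions of a text-line image interact.
    (Illustrative sketch; not the paper's exact formulation.)"""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // reduction, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // reduction, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (B, HW, C/r)
        k = self.key(x).flatten(2)                      # (B, C/r, HW)
        v = self.value(x).flatten(2)                    # (B, C, HW)
        attn = torch.softmax(q @ k, dim=-1)             # (B, HW, HW)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return x + self.gamma * out                     # residual connection


class TinySACRN(nn.Module):
    """Toy end-to-end model: CNN + 2D self-attention feature extractor,
    BLSTM encoder, and per-frame log-probabilities for CTC decoding."""

    def __init__(self, num_classes: int, channels: int = 64, hidden: int = 128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                     # halve height and width
            SelfAttention2D(channels),
        )
        self.encoder = nn.LSTM(channels, hidden, bidirectional=True, batch_first=True)
        self.classifier = nn.Linear(2 * hidden, num_classes)  # includes CTC blank

    def forward(self, images):                   # images: (B, 1, H, W)
        f = self.backbone(images)                # (B, C, H', W')
        f = f.mean(dim=2).transpose(1, 2)        # collapse height -> (B, W', C)
        h, _ = self.encoder(f)                   # (B, W', 2*hidden)
        return self.classifier(h).log_softmax(-1)


# Toy forward pass on a grayscale text-line image.
model = TinySACRN(num_classes=80)
log_probs = model(torch.randn(2, 1, 32, 256))
print(log_probs.shape)                           # torch.Size([2, 128, 80])
```

The resulting per-frame log-probabilities would be trained with `nn.CTCLoss` and decoded (greedily or with beam search) into the final label sequence.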