Position-guided transformer for image captioning
2022
Transformer-based frameworks have shown superior performance in image captioning. However, such frameworks struggle to model geometric interrelations among visual contents in an image and fail to prevent shifts in the distribution of each layer's input in self-attention. In this work, we first propose a Bi-Positional Attention (BPA) module, which incorporates absolute and relative position encodings to precisely explore internal relations between objects and their geometric information in an image. Additionally, we use a Group Normalization (GN) method inside BPA to relieve distribution shifts and better exploit the channel dependence of visual features. To validate our proposals, we apply BPA and GN to the original Transformer to form our Position-Guided Transformer (PGT) network, which learns more comprehensive positional representations to augment spatial interactions among objects for image captioning. We conduct extensive experiments to verify the effectiveness of our model. Compared with non-pretraining state-of-the-art methods, experimental results on the MSCOCO benchmark dataset demonstrate that our PGT achieves competitive performance, reaching a 134.2% CIDEr score on the Karpathy split with a single model and a 136.2% CIDEr score on the official testing server with an ensemble configuration.
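The abstract does not give the exact formulation of BPA, but the minimal PyTorch sketch below illustrates the general idea it describes: an attention layer that combines an absolute positional encoding with a learned relative-position bias over region features, with Group Normalization applied to the channel dimension of the visual input. All names and hyperparameters (BiPositionalAttention, max_regions, num_groups, etc.) are illustrative assumptions, not the authors' implementation.

```python
# Sketch of attention combining absolute + relative position information with
# Group Normalization on region-level visual features. Illustrative only.
import torch
import torch.nn as nn


class BiPositionalAttention(nn.Module):
    def __init__(self, dim=512, num_heads=8, max_regions=100, num_groups=32):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.scale = self.head_dim ** -0.5

        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)

        # Absolute position embedding: one vector per region index.
        self.abs_pos = nn.Embedding(max_regions, dim)
        # Relative position bias: one scalar per head per (i - j) offset.
        self.rel_bias = nn.Embedding(2 * max_regions - 1, num_heads)

        # Group Normalization over the channel dimension of region features.
        self.norm = nn.GroupNorm(num_groups, dim)

    def forward(self, x):
        # x: (batch, num_regions, dim) region-level visual features
        b, n, d = x.shape

        # GroupNorm expects channels in dim 1: (batch, dim, num_regions).
        x = self.norm(x.transpose(1, 2)).transpose(1, 2)

        # Add absolute position encoding before projecting to Q, K, V.
        idx = torch.arange(n, device=x.device)
        x = x + self.abs_pos(idx)

        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q = q.view(b, n, self.num_heads, self.head_dim).transpose(1, 2)
        k = k.view(b, n, self.num_heads, self.head_dim).transpose(1, 2)
        v = v.view(b, n, self.num_heads, self.head_dim).transpose(1, 2)

        attn = (q @ k.transpose(-2, -1)) * self.scale  # (b, heads, n, n)

        # Relative position bias indexed by the pairwise offset (i - j).
        rel_idx = idx[:, None] - idx[None, :] + (self.rel_bias.num_embeddings // 2)
        attn = attn + self.rel_bias(rel_idx).permute(2, 0, 1)  # (heads, n, n)

        out = attn.softmax(dim=-1) @ v
        out = out.transpose(1, 2).reshape(b, n, d)
        return self.proj(out)


features = torch.randn(2, 36, 512)              # e.g. 36 detected regions per image
print(BiPositionalAttention()(features).shape)  # torch.Size([2, 36, 512])
```

In the paper's framing, the absolute encoding ties each feature to its region index while the relative bias modulates attention by pairwise geometric offsets; Group Normalization replaces the usual Layer Normalization inside the attention block to stabilize input distributions and exploit channel dependence.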