Generating description with multi-feature and saliency maps of image

2020 
Automatically generating a description of an image is a task that connects computer vision and natural language processing, and it has gained increasing attention in the field of artificial intelligence. In this paper, we present a model that generates descriptions for images based on an RNN (recurrent neural network), using multi-feature representations weighted by object saliency to represent images. We use an LSTM (long short-term memory), an RNN variant, to translate the multi-feature image representation into text. Most existing methods use a single CNN (convolutional neural network) trained on ImageNet to extract image features, which focuses mainly on the objects in an image. However, the context of the scene is also informative for image captioning, so we incorporate a scene feature extracted with a CNN trained on Places205. We evaluate our model on the MSCOCO dataset with standard metrics. Experiments show that the multi-feature representation performs better than a single feature. In addition, the saliency weighting on images emphasizes salient objects as the subjects of the generated descriptions. The results show that our model outperforms several state-of-the-art image captioning methods.
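To make the described architecture concrete, below is a minimal sketch (not the authors' released code) of the pipeline the abstract outlines: an object feature stream from an ImageNet-trained CNN and a scene feature stream from a Places205-trained CNN are fused, with a saliency weight emphasizing the object stream, and the fused representation is decoded into a caption by an LSTM. All dimensions, the per-image scalar form of the saliency weight, and the fusion by weighted sum are illustrative assumptions, not details taken from the paper.

```python
# Sketch of a multi-feature, saliency-weighted LSTM captioner (assumed design).
import torch
import torch.nn as nn

class MultiFeatureCaptioner(nn.Module):
    def __init__(self, obj_dim=2048, scene_dim=2048, embed_dim=512,
                 hidden_dim=512, vocab_size=10000):
        super().__init__()
        # Project each feature stream into a common embedding space.
        self.obj_proj = nn.Linear(obj_dim, embed_dim)
        self.scene_proj = nn.Linear(scene_dim, embed_dim)
        self.word_embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, vocab_size)

    def forward(self, obj_feat, scene_feat, saliency, captions):
        # saliency: per-image scalar weight emphasizing the object stream
        # (assumed form; the paper derives it from saliency maps on the image).
        fused = saliency.unsqueeze(1) * self.obj_proj(obj_feat) \
                + self.scene_proj(scene_feat)          # (B, embed_dim)
        words = self.word_embed(captions)              # (B, T, embed_dim)
        # Feed the fused image embedding as the first LSTM input, then the words.
        inputs = torch.cat([fused.unsqueeze(1), words], dim=1)
        hidden, _ = self.lstm(inputs)
        return self.classifier(hidden)                 # word logits per step

# Toy usage with random tensors standing in for the CNN features.
model = MultiFeatureCaptioner()
obj = torch.randn(4, 2048)        # e.g. ImageNet-trained CNN features
scene = torch.randn(4, 2048)      # e.g. Places205-trained CNN features
sal = torch.rand(4)               # saliency weights in [0, 1]
caps = torch.randint(0, 10000, (4, 12))
logits = model(obj, scene, sal, caps)
print(logits.shape)               # torch.Size([4, 13, 10000])
```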