Multi-decoder Based Co-attention for Image Captioning

2018 
Recently, image captioning has gained increasing attention in artificial intelligence. Existing image captioning models typically apply a visual attention mechanism only once to capture the relevant region maps, which makes it difficult to attend effectively to the regions relevant to each generated word. In this paper, we propose a novel multi-decoder based co-attention framework for image captioning, composed of multiple decoders that integrate a detection-based attention mechanism and a free-form region based attention mechanism. The proposed approach produces more precise captions by co-attending to free-form regions and detections. In particular, since "teacher forcing" leads to a mismatch between training and testing (exposure bias), we optimize the model with a reinforcement learning approach. The proposed method is evaluated on the benchmark MSCOCO dataset and achieves state-of-the-art performance.
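To make the co-attention idea concrete, below is a minimal sketch (not the authors' code) of a single decoder step that attends separately over detection-based region features and free-form (grid) region features, then fuses the two contexts. All module names, dimensions, and the fusion scheme are illustrative assumptions; the reinforcement learning objective is sketched as a self-critical (SCST-style) policy gradient with a greedy baseline, which is one common instantiation of the RL optimization the abstract mentions, not necessarily the one used in the paper.

```python
# Hedged sketch of co-attention over two feature streams in a captioning
# decoder step. Dimensions and layer choices are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoAttentionDecoderStep(nn.Module):
    def __init__(self, d_feat=2048, d_hid=512):
        super().__init__()
        self.att_det = nn.Linear(d_feat + d_hid, 1)   # scores detection regions
        self.att_grid = nn.Linear(d_feat + d_hid, 1)  # scores free-form grid regions
        self.fuse = nn.Linear(2 * d_feat, d_hid)      # merges the two contexts

    def attend(self, scorer, feats, h):
        # feats: (B, N, d_feat) region features; h: (B, d_hid) decoder hidden state
        h_exp = h.unsqueeze(1).expand(-1, feats.size(1), -1)
        logits = scorer(torch.cat([feats, h_exp], dim=-1)).squeeze(-1)
        alpha = F.softmax(logits, dim=-1)              # per-region attention weights
        return (alpha.unsqueeze(-1) * feats).sum(dim=1)

    def forward(self, det_feats, grid_feats, h):
        # Co-attend: each stream gets its own attention, conditioned on h,
        # so the regions relevant to the current word can differ per stream.
        ctx_det = self.attend(self.att_det, det_feats, h)
        ctx_grid = self.attend(self.att_grid, grid_feats, h)
        return torch.tanh(self.fuse(torch.cat([ctx_det, ctx_grid], dim=-1)))

def scst_loss(logprobs, sampled_reward, greedy_reward):
    # Self-critical policy gradient (an assumed instantiation of the RL step):
    # the advantage is the sampled caption's reward (e.g., CIDEr) minus the
    # reward of the greedily decoded caption, which serves as a baseline.
    # logprobs: (B, T) log-probabilities of the sampled tokens.
    advantage = (sampled_reward - greedy_reward).detach()
    return -(advantage.unsqueeze(-1) * logprobs).mean()
```

Conditioning each attention stream on the same hidden state lets the decoder pick, at every word, whichever stream (object detections or free-form regions) is more informative, which is the intuition behind co-attending the two.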