Clothes Image Caption Generation with Attribute Detection and Visual Attention Model

2020 
Abstract: Fashion is a multi-billion-dollar industry with direct social, cultural, and economic implications in the real world. While computer vision has demonstrated remarkable success in fashion-domain applications, natural language processing has also contributed to the area, building a connection between clothes images and human semantic understanding. A fundamental task in combining image and language understanding is generating a natural language sentence that accurately summarizes the content of a clothes image. In this paper, we develop a joint attribute detection and visual attention framework for clothes image captioning. Specifically, in order to involve more clothing attributes in learning, we first utilize a pre-trained Convolutional Neural Network (CNN) to learn features that characterize richer information about clothing attributes. Based on these learned features, we then adopt an encoder/decoder framework, where we first encode the clothes features and then input them to a language Long Short-Term Memory (LSTM) model to decode the clothes descriptions. The method greatly enhances the performance of clothes image captioning and reduces misleading attention. Extensive simulations based on real-world data verify the effectiveness of the proposed method.
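The abstract describes a pipeline of a pre-trained CNN that extracts attribute-aware features, followed by an attention-equipped LSTM decoder that generates the caption. The sketch below is a minimal illustration of that kind of pipeline, not the authors' released code; the backbone choice (ResNet-50), the multi-label attribute head, and all dimensions (vocabulary size, attribute count, hidden sizes) are illustrative assumptions.

```python
# Minimal sketch (assumed architecture, not the paper's implementation):
# a pre-trained CNN encodes a clothes image into region features plus
# clothing-attribute scores, and an LSTM with soft visual attention decodes a caption.
import torch
import torch.nn as nn
import torchvision.models as models


class ClothesEncoder(nn.Module):
    """Pre-trained CNN yielding spatial region features and attribute logits."""
    def __init__(self, num_attributes=100, feat_dim=2048):
        super().__init__()
        backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        self.cnn = nn.Sequential(*list(backbone.children())[:-2])  # keep conv feature map
        self.attr_head = nn.Linear(feat_dim, num_attributes)       # multi-label attribute detector

    def forward(self, images):
        fmap = self.cnn(images)                              # (B, 2048, H, W)
        B, C, H, W = fmap.shape
        regions = fmap.view(B, C, H * W).permute(0, 2, 1)    # (B, H*W, 2048) region features
        attr_logits = self.attr_head(regions.mean(dim=1))    # pooled features -> attribute scores
        return regions, attr_logits


class AttentionDecoder(nn.Module):
    """LSTM decoder that attends over image regions at every time step."""
    def __init__(self, vocab_size, feat_dim=2048, embed_dim=512, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.att_feat = nn.Linear(feat_dim, hidden_dim)
        self.att_hid = nn.Linear(hidden_dim, hidden_dim)
        self.att_score = nn.Linear(hidden_dim, 1)
        self.lstm = nn.LSTMCell(embed_dim + feat_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, regions, captions):
        B, T = captions.shape
        h = regions.new_zeros(B, self.lstm.hidden_size)
        c = regions.new_zeros(B, self.lstm.hidden_size)
        logits = []
        for t in range(T):
            # Soft attention: weight each region by its relevance to the current hidden state.
            scores = self.att_score(
                torch.tanh(self.att_feat(regions) + self.att_hid(h).unsqueeze(1)))
            alpha = torch.softmax(scores, dim=1)              # (B, H*W, 1) attention weights
            context = (alpha * regions).sum(dim=1)            # (B, feat_dim) attended feature
            word = self.embed(captions[:, t])
            h, c = self.lstm(torch.cat([word, context], dim=1), (h, c))
            logits.append(self.out(h))
        return torch.stack(logits, dim=1)                     # (B, T, vocab_size)
```

In a setup like this, training would typically combine a cross-entropy loss on the predicted caption tokens with a multi-label (e.g., binary cross-entropy) loss on the attribute logits, so the encoder is pushed to retain attribute information that the attention mechanism can later exploit.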