Personalized Sketch-Based Image Retrieval by Convolutional Neural Network and Deep Transfer Learning

2019 
Sketch-based image retrieval (SBIR) finds natural images according to features and rules defined by humans. The retrieved results are generally similar to the query in contour, but the complete semantic information of the image is missing. From the user's point of view, the same hand-drawn sketch may represent many different things, owing to the semantic "one-to-many" category mapping between sketches and natural images; this is the inherent ambiguity of hand-drawn sketches. In addition, every user draws with distinct characteristics, so the retrieval results generally cannot fully match the user's intent. To address these challenges, a personalized SBIR architecture is proposed, consisting of a deep fully convolutional neural network as a general model and a personalized model that uses transfer learning to obtain fine-grained image semantic features. On the basis of the pre-trained general model and the images the user has selected in the past, we construct the training dataset for the personalized model. The user's historical feedback is then combined with the current hand-drawn sketch as input to the transfer learning model, which fine-tunes the distribution of features in the vector space so that the network learns personalized semantic information. Experiments show that the general model has strong generalization ability, achieving a mean average precision of 0.64 on the Flickr15K dataset. The transferred model achieves a fine-grained partition of the image semantic vector space, satisfying the personalized retrieval requirements of hand-drawn sketch input.
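
The abstract gives no code, so the following is a minimal illustrative sketch in PyTorch of the two-stage pipeline it describes: a general fully convolutional embedding network, then a personalized copy fine-tuned by transfer learning on images the user selected in the past. All identifiers (SketchEncoder, personalize, user_loader) are hypothetical, and details such as the triplet loss, the frozen layers, and the assumption that natural images are converted to one-channel edge maps so a single encoder handles both domains are this sketch's choices, not the authors' implementation.

# Minimal sketch (not the authors' code) of a general SBIR embedding
# network plus a transfer-learning personalization step.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class SketchEncoder(nn.Module):
    """Fully convolutional encoder mapping a one-channel sketch (or a photo
    rendered as an edge map) to an L2-normalized semantic feature vector."""
    def __init__(self, embed_dim: int = 256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 256, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(256, embed_dim, 3, stride=2, padding=1),
        )
        # Global average pooling keeps the network fully convolutional,
        # with no fixed-size fully connected layers.
        self.pool = nn.AdaptiveAvgPool2d(1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.pool(self.features(x)).flatten(1)
        return F.normalize(z, dim=1)  # unit vectors, so cosine similarity ranks results

def personalize(general: SketchEncoder, user_loader,
                epochs: int = 5, lr: float = 1e-4) -> SketchEncoder:
    """Transfer-learning step: clone the pre-trained general model, freeze the
    early convolutional layers, and fine-tune the rest on (sketch, selected
    image, non-selected image) triples built from the user's feedback history,
    shifting the embedding space toward that user's intent."""
    model = copy.deepcopy(general)
    for p in list(model.features.parameters())[:4]:  # freeze first two conv layers
        p.requires_grad = False
    opt = torch.optim.Adam((p for p in model.parameters() if p.requires_grad), lr=lr)
    criterion = nn.TripletMarginLoss(margin=0.3)
    for _ in range(epochs):
        for sketch, pos, neg in user_loader:
            loss = criterion(model(sketch), model(pos), model(neg))
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model

At query time, the personalized encoder would embed the current sketch and rank gallery images by cosine similarity of their embeddings; freezing the early layers preserves the general model's generalization while the later layers adapt the feature distribution to the individual user.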