ClothingNet: Cross-domain Clothing Retrieval with Feature Fusion and Quadruplet Loss

2020 
Cross-domain clothing retrieval is an active research topic because of its massive potential applications in the fashion industry. Owing to the large number of garment categories and styles, and to variations in clothing appearance caused by camera angle, shooting conditions, cluttered backgrounds, and the posture of the dressed human body, the retrieval accuracy of traditional consumer-to-shop schemes is typically low. In this paper, we propose ClothingNet, a novel cross-domain clothing retrieval method built on a deep convolutional neural network that combines feature fusion with a quadruplet loss function. First, a pre-trained ResNet-50 is adopted to extract feature maps from clothing images. The extracted high-level features are then merged with middle-level features, and the final representation of a clothing image is obtained by constraining the fused feature values to a fixed range via L2 normalization. This fused feature provides a comprehensive description of the differences between clothing images. To train ClothingNet effectively, the cross-domain clothing images are organized into quadruplets for computing the loss function, and the network parameters are optimized by back-propagation with stochastic gradient descent. The proposed method is validated on two public clothing-retrieval datasets, DARN and DeepFashion, achieving top-50 retrieval accuracies of 35.67% and 53.52%, respectively. The experimental results demonstrate the effectiveness of our clothing retrieval method.
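
The abstract does not give implementation details, so the following PyTorch sketch is only a plausible reading of the two described components: mid/high-level feature fusion with L2 normalization, and a quadruplet loss. The class name `ClothingNetSketch`, the choice of ResNet-50's `layer3`/`layer4` as the middle- and high-level features, and the margin values are assumptions; the loss follows the standard quadruplet formulation of Chen et al. (CVPR 2017) rather than the paper's exact definition.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models


class ClothingNetSketch(nn.Module):
    """Hypothetical feature-fusion backbone: a pre-trained ResNet-50 whose
    middle-level (layer3) and high-level (layer4) feature maps are pooled,
    concatenated, and L2-normalized into one embedding."""

    def __init__(self):
        super().__init__()
        backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
        # Stages up to and including layer3 yield the middle-level feature map.
        self.stem = nn.Sequential(
            backbone.conv1, backbone.bn1, backbone.relu, backbone.maxpool,
            backbone.layer1, backbone.layer2, backbone.layer3,
        )
        self.layer4 = backbone.layer4          # high-level feature map
        self.pool = nn.AdaptiveAvgPool2d(1)    # global average pooling

    def forward(self, x):
        mid = self.stem(x)                     # (B, 1024, H, W)
        high = self.layer4(mid)                # (B, 2048, H/2, W/2)
        mid_vec = self.pool(mid).flatten(1)    # (B, 1024)
        high_vec = self.pool(high).flatten(1)  # (B, 2048)
        fused = torch.cat([mid_vec, high_vec], dim=1)  # (B, 3072)
        # Constrain the fused feature values to a fixed range via the L2 norm.
        return F.normalize(fused, p=2, dim=1)


def quadruplet_loss(anchor, positive, neg1, neg2, margin1=1.0, margin2=0.5):
    """Standard quadruplet loss as a stand-in for the paper's loss:
    pushes anchor-negative and negative-negative distances beyond the
    anchor-positive distance by margins (margin values are illustrative)."""
    d_ap = F.pairwise_distance(anchor, positive)
    d_an = F.pairwise_distance(anchor, neg1)
    d_nn = F.pairwise_distance(neg1, neg2)
    return (F.relu(d_ap - d_an + margin1) + F.relu(d_ap - d_nn + margin2)).mean()


# Usage sketch: embed a quadruplet of consumer/shop crops, compute the loss,
# and back-propagate so an SGD optimizer can update the network parameters.
model = ClothingNetSketch()
a = p = n1 = n2 = torch.randn(4, 3, 224, 224)  # placeholder image batches
loss = quadruplet_loss(model(a), model(p), model(n1), model(n2))
loss.backward()
```

Relative to a triplet loss, the second term (comparing the anchor-positive pair against a negative pair that excludes the anchor) adds an extra inter-class constraint, which is one common motivation for quadruplet training.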