Semi-Supervised Weight Learning for the Spatial Search Method in ConvNet-Based Image Retrieval

2016 
As a state-of-the-art ConvNet-based image retrieval method, spatial search has shown excellent retrieval performance and outperformed competing approaches. A key component of this method is a weighted combination of distances evaluated over different regions of a query image. However, these weights are currently tuned manually through a trial-and-error exhaustive search. This not only incurs a lengthy parameter tuning process, but also makes it hard to guarantee the optimality of the tuned weights. Moreover, the tuned weights may not generalise when the nature of the image data set changes. To improve this situation, we propose to automatically learn the combination weights based on retrieval groundtruth. Specifically, we develop a method called semi-supervised weight learning (SWL) within the framework of distance metric learning. In addition to generating triplet constraints from retrieval groundtruth, we leverage unlabelled images to generate numerous unsupervised constraints that stabilise the learning process and improve learning efficiency. By connecting the problem to a recent primal solver for linear support vector machines, we put forward an efficient algorithm to solve the resulting large-scale optimisation problem. Experimental results on three benchmark data sets and a newly collected archival photo data set demonstrate the effectiveness of the proposed weight learning approach. It achieves comparable or better retrieval performance than the manual tuning approach, especially on the new archival photo data set.
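To make the idea of the abstract concrete, the sketch below illustrates learning non-negative weights for combining per-region distances from triplet constraints, using an SVM-style hinge objective minimised by sub-gradient steps. This is not the paper's implementation: the exact objective, the way unsupervised constraints enter, and the primal SVM solver used by the authors differ; the function names (`combined_distance`, `learn_weights`), the learning-rate loop, and the non-negativity projection are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch (not the paper's SWL algorithm): learn non-negative
# combination weights w over per-region distances from triplet constraints,
# using a hinge loss in the spirit of a linear SVM trained in the primal.

def combined_distance(w, region_dists):
    """Weighted combination of distances evaluated at different regions.
    region_dists: array of shape (n_regions,), one distance per region."""
    return float(np.dot(w, region_dists))

def learn_weights(triplets, n_regions, C=1.0, lr=0.01, epochs=50):
    """triplets: list of (d_qp, d_qn) pairs, each an (n_regions,) array of
    per-region distances for a relevant pair (q, p) and an irrelevant pair (q, n).
    Minimises 0.5*||w||^2 + C * sum(max(0, 1 - w.(d_qn - d_qp))) by sub-gradient descent."""
    w = np.ones(n_regions) / n_regions
    for _ in range(epochs):
        grad = w.copy()                      # gradient of the 0.5*||w||^2 regulariser
        for d_qp, d_qn in triplets:
            margin = np.dot(w, d_qn - d_qp)  # irrelevant image should be farther than relevant
            if margin < 1.0:                 # violated (or active) triplet constraint
                grad -= C * (d_qn - d_qp)    # sub-gradient of the hinge term
        w -= lr * grad
        w = np.maximum(w, 0.0)               # keep combination weights non-negative
    return w
```

In the setting the abstract describes, the triplet list would mix supervised constraints generated from retrieval groundtruth with additional unsupervised constraints derived from unlabelled images; both kinds feed the same weighted-distance objective.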