Robust dense correspondence using deep convolutional features

2019 
Image matching is a challenging problem because different views often undergo significant appearance changes caused by illumination variations, scale changes, large displacements, and deformation. Most state-of-the-art algorithms, however, still struggle with challenging real-world cases, especially when the views contain different objects and scenes. In this paper, we explore deep features extracted from pretrained convolutional neural networks to guide image matching so that dense pixel correspondence can be established. Because the deep features describe image structures and details hierarchically, a matching method built on them can handle diverse scenes and object appearances effectively. We analyze the deep features and compare them with other robust features, e.g., SIFT. Extensive experiments on benchmark datasets demonstrate that the proposed algorithm performs favorably against state-of-the-art methods in terms of visual matching quality and accuracy.
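To illustrate the general idea of matching with pretrained convolutional features, the following is a minimal sketch, not the authors' exact pipeline: it extracts features from an intermediate layer of torchvision's pretrained VGG-16 (the layer choice, image size, and nearest-neighbour matching are illustrative assumptions) and builds a coarse dense correspondence by cosine similarity between feature vectors of two views.

```python
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

# Pretrained VGG-16 convolutional trunk, truncated at relu4_3 (index 22)
# so the features keep some spatial resolution while remaining discriminative.
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.eval()
extractor = torch.nn.Sequential(*list(vgg.children())[:23])

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def deep_features(path):
    """Return (H*W, C) L2-normalised conv features and the feature-grid size."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        fmap = extractor(img)                      # (1, C, H, W)
    c, h, w = fmap.shape[1:]
    feats = fmap.squeeze(0).reshape(c, h * w).t()  # (H*W, C)
    return F.normalize(feats, dim=1), (h, w)

# Coarse dense correspondence by nearest-neighbour matching of deep features.
# "view_a.jpg" / "view_b.jpg" are placeholder file names.
feats_a, (h, w) = deep_features("view_a.jpg")
feats_b, _ = deep_features("view_b.jpg")
similarity = feats_a @ feats_b.t()                 # cosine similarity matrix
match_idx = similarity.argmax(dim=1)               # best match per source cell
match_y, match_x = match_idx // w, match_idx % w   # grid coordinates in view B
```

In practice, such coarse matches are typically refined (e.g. with finer-scale features or a regularised flow estimate) to obtain pixel-level correspondence; this sketch only shows the feature-extraction and matching step.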