Towards learning line descriptors from patches: a new paradigm and large-scale dataset

2020 
Line feature description is important for image matching. However, its development has been much slower than that of point description and is still at the stage of manual design, which suffers from weak distinctiveness and poor robustness under complex conditions. To improve on this situation, this paper proposes to learn line feature descriptors with a convolutional neural network. First, a large-scale dataset consisting of about 229,000 labeled pairs of matched lines is built for training and testing. Then, a paradigm for learning line descriptors on the constructed line dataset is proposed. Specifically, each line is represented uniquely by the stacked mean and standard deviation patches of the support regions of the points lying on the line, and this representation is fed into L2Net to output the required line descriptor directly. Following line matching principles, the network is trained with the triplet loss that is widely used for learning point descriptors. Experimental results for both line matching and curve matching demonstrate the superiority and effectiveness of the proposed learning-based descriptor; in particular, average mAP increases of 4.66–5.7%, 10.59–12.10%, 0.96–3.75%, and 3.73% are obtained on the testing subset, the Oxford dataset, the line dataset, and the curve dataset, respectively, compared to handcrafted descriptors. As an application, we apply the learned line descriptor to image stitching and also obtain good results.
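
The abstract only outlines the pipeline. The following is a minimal, illustrative PyTorch sketch of how the stacked mean/standard-deviation patch representation and triplet-loss training might be wired together; it is not the authors' implementation. The network is a simple stand-in for L2Net, and the names (line_to_input, LineDescriptorNet), the 32x32 patch size, and the 128-D descriptor dimension are assumptions for illustration only.

```python
# Minimal sketch (assumed details, not the paper's code): each line is turned
# into a 2-channel input built from the per-pixel mean and std of the support
# region patches of points sampled along it, then mapped to a descriptor by a
# stand-in L2Net-style CNN trained with a triplet margin loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

def line_to_input(point_patches):
    """point_patches: (N, 32, 32) support-region patches for N points on one
    line. Returns a (2, 32, 32) tensor of their per-pixel mean and std."""
    mean = point_patches.mean(dim=0)
    std = point_patches.std(dim=0)
    return torch.stack([mean, std], dim=0)

class LineDescriptorNet(nn.Module):
    """Stand-in for L2Net: maps the 2-channel stacked patch to an
    L2-normalized 128-D line descriptor."""
    def __init__(self, dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.BatchNorm2d(128), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(128, dim)

    def forward(self, x):
        x = self.features(x).flatten(1)
        return F.normalize(self.fc(x), dim=1)

# One training step on (anchor, positive, negative) line triplets,
# here filled with random patches purely to show the data flow.
net = LineDescriptorNet()
loss_fn = nn.TripletMarginLoss(margin=1.0)
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)

anchor   = torch.stack([line_to_input(torch.rand(20, 32, 32)) for _ in range(8)])
positive = torch.stack([line_to_input(torch.rand(20, 32, 32)) for _ in range(8)])
negative = torch.stack([line_to_input(torch.rand(20, 32, 32)) for _ in range(8)])

loss = loss_fn(net(anchor), net(positive), net(negative))
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

In this reading, the mean/std stacking makes the input size independent of how many points are sampled along the line, so lines of different lengths can share one fixed-size network input.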