Recent developments in large-scale tie-point matching

2016 
Abstract: Feature matching – i.e. finding corresponding point features in different images to serve as tie-points for camera orientation – is a fundamental step in photogrammetric 3D reconstruction. If the input image set is large and unordered, which is becoming increasingly common with the spread of photogrammetric recording to untrained user groups and even crowd-sourced geodata collection, the bottleneck of the reconstruction pipeline is the matching step, for two reasons. (i) Image acquisition without detailed viewpoint planning requires a denser set of viewpoints with larger overlaps, to ensure appropriate coverage of the object of interest and to guarantee sufficient redundancy for reliable reconstruction in spite of the unoptimised network geometry. As a consequence, there is a large number of images with overlapping viewfields, resulting in a more expensive matching step than for, say, a regular block geometry. (ii) In the absence of a carefully pre-planned recording sequence it is not even known which images overlap. One thus faces the even bigger challenge of determining which pairs of images can have tie-points at all and should therefore be fed into the matching procedure. In this paper we attempt a systematic survey of the state-of-the-art for tie-point generation in unordered image collections, including recent developments for very large image sets.
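The survey reviews the methods built on top of this basic operation; purely as a hedged illustration of the pairwise matching step the abstract refers to, the sketch below detects local features in two images, matches their descriptors with a ratio test, and geometrically verifies the correspondences with RANSAC to obtain candidate tie-points. It uses OpenCV; the file names, detector choice, and thresholds are illustrative assumptions, not values taken from the paper.

```python
import cv2
import numpy as np

# Load two (assumed) overlapping images in greyscale; file names are placeholders.
img1 = cv2.imread("view_a.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view_b.jpg", cv2.IMREAD_GRAYSCALE)

# Detect local features and compute descriptors (SIFT here; any
# scale- and rotation-invariant detector/descriptor works analogously).
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Approximate nearest-neighbour matching of descriptors, followed by
# a ratio test to discard ambiguous correspondences.
flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))
knn_matches = flann.knnMatch(des1, des2, k=2)
good = [m for m, n in knn_matches if m.distance < 0.8 * n.distance]

# Geometric verification: estimate the fundamental matrix with RANSAC and
# keep only the inlier correspondences as candidate tie-points.
if len(good) >= 8:
    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
    if mask is not None:
        tie_points = [(p1, p2) for p1, p2, ok in zip(pts1, pts2, mask.ravel()) if ok]
        print(f"{len(tie_points)} verified tie-points for this image pair")
```

For an unordered collection, running this exhaustively over all image pairs scales quadratically with the number of images, which is exactly the bottleneck the survey addresses with pair pre-selection strategies.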