Geometrical Analysis of Localization Error in Stereo Vision Systems

2013 
Determining an object's location in a specific region is an important task in many machine vision applications. Several parameters affect the accuracy of the localization process. The quantization process in the charge-coupled device (CCD) of a camera is one source of error: it causes the position of the observed object to be estimated rather than identified exactly. A cluster of points in the field of view of a camera is mapped onto a single pixel; these points form an uncertainty region. In this paper, we present a geometrical model to analyze the volume of this uncertainty region as a criterion for object localization error. The proposed approach models the field of view of each pixel as an oblique cone, and the uncertainty region is formed by the intersection of two such cones, each emanating from one of the two cameras. Because of the complexity of modeling the intersection of two oblique cones, we propose three methods to simplify the problem. The first two methods use only four lines; each line passes through the camera's lens, modeled as a pinhole, and through one of the four vertices of a square fitted around the circular pixel. The first method projects all points of these four lines onto an image plane. The second method uses line-cone intersections instead of the intersection of the two cones, so the boundary points of the cone-cone intersection are determined from line-cone intersections. In the third method, the extremum points of the intersection of the two cones are determined by the Lagrangian method. The validity of our methods is verified through extensive simulations. In addition, we analyze the effects of parameters such as the baseline length, focal length, and pixel size on the magnitude of the estimation error.
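To make the four-line idea concrete, the following is a minimal sketch, not the authors' implementation: it assumes a simple rectified two-camera pinhole setup with hypothetical parameter values (baseline 0.10 m, focal length 8 mm, pixel side 10 µm), traces the rays through the pinhole and the four corners of a square fitted around each observed pixel, and triangulates all corner-ray pairs to bound the uncertainty region produced by pixel quantization.

```python
import numpy as np

# Hypothetical rectified stereo setup: two pinhole cameras separated along x
# by baseline b, with focal length f and square pixel side p (all in metres).
b, f, p = 0.10, 0.008, 10e-6

def pixel_corner_rays(center, cam_x):
    """Pinhole origin and unit directions of the four rays through the corners
    of the square fitted around a pixel whose centre, expressed in metres on
    the image plane, is `center`."""
    cx, cy = center
    corners = [(cx - p/2, cy - p/2), (cx - p/2, cy + p/2),
               (cx + p/2, cy - p/2), (cx + p/2, cy + p/2)]
    rays = []
    for u, v in corners:
        d = np.array([u, v, f])            # image-plane point relative to the pinhole
        rays.append(d / np.linalg.norm(d))
    return np.array([cam_x, 0.0, 0.0]), rays

def closest_point(o1, d1, o2, d2):
    """Midpoint of the shortest segment between two skew rays o + t*d."""
    w0 = o1 - o2
    a, bb, c = d1 @ d1, d1 @ d2, d2 @ d2
    dd, e = d1 @ w0, d2 @ w0
    denom = a * c - bb * bb
    t1 = (bb * e - c * dd) / denom
    t2 = (a * e - bb * dd) / denom
    return 0.5 * ((o1 + t1 * d1) + (o2 + t2 * d2))

# A world point at 2 m depth projects to u = 0.0005 m in the left camera
# and u = 0.0001 m in the right camera under this geometry.
o1, rays1 = pixel_corner_rays((0.0005, 0.0), cam_x=0.0)
o2, rays2 = pixel_corner_rays((0.0001, 0.0), cam_x=b)

# Triangulate every pair of corner rays; the spread of the resulting points
# bounds the uncertainty region caused by quantizing the two projections.
pts = np.array([closest_point(o1, r1, o2, r2) for r1 in rays1 for r2 in rays2])
print("extent of uncertainty region (m):", pts.max(axis=0) - pts.min(axis=0))
```

The depth (z) extent dominates the spread, which is consistent with the abstract's point that baseline length, focal length, and pixel size govern the estimation error.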