Inertial and imaging sensor fusion for image-aided navigation with affine distortion prediction

2012 
The Air Force Institute of Technology's Advanced Navigation Technology Center has invested significant research time and effort into alternative precision navigation methods to counteract the increasing dependency on the Global Positioning System (GPS) for precision navigation. The use of visual sensors has emerged as a valuable and feasible precision navigation alternative which, when coupled with inertial navigation sensors, can reduce navigation estimation errors by approximately two orders of magnitude [1] compared to inertial-only solutions. A key component of many image-aided navigation algorithms is the requirement to detect and track salient features over many frames of an image sequence. However, feature matching accuracy is drastically reduced when the image sets differ in 3-D pose, due to the affine distortions induced on feature descriptors [2]. In this research, this effect is counteracted by digitally simulating affine distortions on input images in order to calculate more accurate feature descriptors, which provide improved matching across large changes in viewpoint. These techniques are experimentally demonstrated in an outdoor environment with a consumer-grade inertial sensor and three imaging sensors, one of which is oriented orthogonally to the others. False matches generated by the orthogonal camera are shown to degrade the navigation solution if the change in 3-D pose is not accounted for. Using a tactical-grade inertial sensor coupled with GPS position data as the truth source, the improved image-aided navigation algorithm, which accounts for changes in 3-D pose, is shown to reduce navigation errors by 24% in position, 16% in velocity, and 35% in attitude compared to the standard two-camera image-aided navigation setup.
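The core idea of simulating affine distortions to improve descriptor matching across viewpoint changes can be illustrated with a short sketch. The code below is not the thesis implementation; it is a minimal, hypothetical example (function names, tilt/rotation parameters, and the use of OpenCV SIFT are assumptions) showing how affine-warped views of one frame can be generated and their descriptors pooled before matching against a second frame.

```python
# Hypothetical sketch: affine-simulated feature matching between two frames.
# Assumes OpenCV with SIFT available (opencv-python >= 4.4).
import cv2


def affine_simulations(image, tilts=(1.0, 2.0), rotations=(0, 45, 90, 135)):
    """Yield affine-warped copies of `image` approximating viewpoint changes.

    A tilt t crudely simulates the foreshortening of an out-of-plane rotation
    by compressing one image axis by 1/t after an in-plane rotation.
    """
    h, w = image.shape[:2]
    for t in tilts:
        for phi in rotations:
            # In-plane rotation about the image centre.
            R = cv2.getRotationMatrix2D((w / 2, h / 2), phi, 1.0)
            rotated = cv2.warpAffine(image, R, (w, h))
            # Anisotropic scaling along x simulates the out-of-plane tilt.
            tilted = cv2.resize(rotated, None, fx=1.0 / t, fy=1.0,
                                interpolation=cv2.INTER_AREA)
            yield tilted


def match_with_affine_simulation(img_a, img_b, ratio=0.75):
    """Match SIFT features between two frames, pooling descriptors taken
    from affine-simulated views of the first frame.

    Note: a full implementation would also map matched keypoint coordinates
    back through the inverse of each simulated affine warp.
    """
    sift = cv2.SIFT_create()
    matcher = cv2.BFMatcher(cv2.NORM_L2)

    kp_b, des_b = sift.detectAndCompute(img_b, None)

    good = []
    for view in affine_simulations(img_a):
        kp_a, des_a = sift.detectAndCompute(view, None)
        if des_a is None or des_b is None:
            continue
        # Lowe ratio test to reject ambiguous correspondences.
        for pair in matcher.knnMatch(des_a, des_b, k=2):
            if len(pair) < 2:
                continue
            m, n = pair
            if m.distance < ratio * n.distance:
                good.append(m)
    return good
```

Pooling descriptors from several simulated tilts and rotations trades extra computation for matches that survive large viewpoint differences, which is the same trade-off motivating the affine distortion prediction described in the abstract.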