Salient FlowNet and Decoupled LSTM Network for Robust Visual Odometry

2019 
An end-to-end CNN-LSTM network with salient feature attention and a context-guided feature selection mechanism for robust visual odometry (VO) on monocular image sequences is developed in this paper. Deep learning-based visual odometry methods have drawn significant attention compared with traditional methods. However, existing learning-based VO methods usually ignore that redundant features increase error accumulation, and that coupling of rotational and translational motion parameters enlarges trajectory drift. A scheme that enhances visually salient features and decouples motion parameters to alleviate these problems is investigated in our approach. The classical FlowNet is paralleled with a VGG-based salient feature model to reinforce receptive fields; this parallel structure is designed around an attention mechanism to extract prominent geometric features from successive monocular images. Furthermore, to reduce the coupling of different motion patterns, a motion-decoupled dual Long Short-Term Memory (LSTM) scheme based on a guided feature selection mechanism is designed to select guided features and separately regress rotational and translational parameters. Experiments on the KITTI dataset show competitive performance of the proposed approach compared with state-of-the-art deep learning-based visual odometry methods.
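To make the described architecture concrete, below is a minimal PyTorch sketch of the overall pipeline: a flow-style encoder in parallel with a VGG-style salient-feature branch, fused by an attention reweighting, followed by gated feature selection feeding two separate LSTMs for rotation and translation. All module names, layer sizes, and the specific gating form are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn


class DecoupledDualLSTMVO(nn.Module):
    """Hypothetical sketch of the paper's pipeline: parallel encoders,
    attention fusion, guided feature selection, and decoupled LSTMs.
    Layer sizes are placeholders, not the published configuration."""

    def __init__(self, feat_dim=1024, hidden=1000):
        super().__init__()
        # Stand-in for the FlowNet-style encoder over stacked image pairs.
        self.flow_encoder = nn.Sequential(
            nn.Conv2d(6, 64, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(64, 128, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Stand-in for the VGG-based salient-feature branch.
        self.salient_encoder = nn.Sequential(
            nn.Conv2d(6, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Simple channel-attention stand-in for "salient feature attention".
        self.attention = nn.Sequential(nn.Linear(256, 256), nn.Sigmoid())
        self.proj = nn.Linear(256, feat_dim)
        # Guided feature selection: per-branch gates choosing which
        # fused features feed the rotation vs. translation regressor.
        self.rot_gate = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.Sigmoid())
        self.trans_gate = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.Sigmoid())
        # Decoupled recurrent regressors for the two motion patterns.
        self.rot_lstm = nn.LSTM(feat_dim, hidden, num_layers=2, batch_first=True)
        self.trans_lstm = nn.LSTM(feat_dim, hidden, num_layers=2, batch_first=True)
        self.rot_head = nn.Linear(hidden, 3)    # e.g. Euler angles
        self.trans_head = nn.Linear(hidden, 3)  # x, y, z translation

    def forward(self, pairs):
        # pairs: (B, T, 6, H, W) -- consecutive frames stacked channel-wise.
        B, T = pairs.shape[:2]
        x = pairs.flatten(0, 1)                 # (B*T, 6, H, W)
        f = self.flow_encoder(x).flatten(1)     # (B*T, 128)
        s = self.salient_encoder(x).flatten(1)  # (B*T, 128)
        fused = torch.cat([f, s], dim=1)        # (B*T, 256)
        fused = fused * self.attention(fused)   # attention reweighting
        feat = self.proj(fused).view(B, T, -1)  # (B, T, feat_dim)
        rot_seq, _ = self.rot_lstm(feat * self.rot_gate(feat))
        trans_seq, _ = self.trans_lstm(feat * self.trans_gate(feat))
        return self.rot_head(rot_seq), self.trans_head(trans_seq)


# Usage sketch: a batch of 2 sequences of 5 stacked frame pairs.
model = DecoupledDualLSTMVO()
rot, trans = model(torch.randn(2, 5, 6, 64, 192))
print(rot.shape, trans.shape)  # (2, 5, 3) and (2, 5, 3)
```

The point of the dual-LSTM design, as the abstract describes it, is that rotation and translation follow different temporal dynamics; giving each its own gated input and recurrent state keeps errors in one motion component from propagating into the other.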