Vehicle Tracking Using Surveillance With Multimodal Data Fusion

2018 
Vehicle location prediction, or vehicle tracking, is an important topic in connected vehicles. The task is difficult when only single-modal data are available, which can introduce bias and limit accuracy. With the development of sensor networks in connected vehicles, multimodal data are becoming accessible. We therefore propose a framework for vehicle tracking with multimodal data fusion. Specifically, we fuse the results of two modalities, images and velocities, in the vehicle-tracking task. Images, processed by the vehicle-detection module, provide visual information about vehicle features, while velocity estimation narrows down the possible locations of the target vehicles, reducing the number of candidates to be compared and thereby the time and computational cost. The vehicle-detection model is a color Faster R-CNN that takes both the texture and the color of vehicles as input. Velocity estimation is performed with a Kalman filter, a classical tracking method. Finally, a multimodal data fusion method integrates these outcomes to accomplish the vehicle-tracking task. Experimental results demonstrate the efficiency of the proposed methods, which can track vehicles across a series of surveillance cameras in urban areas.
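The paper itself provides no implementation details here, but the velocity-estimation step can be illustrated with a minimal sketch: a constant-velocity Kalman filter predicts a vehicle's next image-plane position, and detection candidates are then gated by distance to that prediction so that fewer appearance comparisons are needed. The class and function names (ConstantVelocityKF, gate_candidates), the state model, and all noise and gating values below are assumptions for illustration, not the authors' configuration.

```python
import numpy as np


class ConstantVelocityKF:
    """Constant-velocity Kalman filter over image-plane position (x, y).

    State: [x, y, vx, vy]; measurement: detected centroid [x, y].
    Noise magnitudes are illustrative placeholders, not values from the paper.
    """

    def __init__(self, x0, y0, dt=1.0, q=1.0, r=5.0):
        self.x = np.array([x0, y0, 0.0, 0.0], dtype=float)   # state estimate
        self.P = np.eye(4) * 100.0                            # state covariance
        self.F = np.array([[1, 0, dt, 0],                     # state transition
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],                      # measurement model
                           [0, 1, 0, 0]], dtype=float)
        self.Q = np.eye(4) * q                                # process noise
        self.R = np.eye(2) * r                                # measurement noise

    def predict(self):
        """Propagate the state one frame; return the predicted (x, y)."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        """Correct the state with a measured centroid z = [x, y]."""
        z = np.asarray(z, dtype=float)
        innovation = z - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)              # Kalman gain
        self.x = self.x + K @ innovation
        self.P = (np.eye(4) - K @ self.H) @ self.P


def gate_candidates(predicted_xy, detections, radius=50.0):
    """Keep only detections within `radius` pixels of the predicted location,
    reducing the number of candidates passed to appearance matching."""
    p = np.asarray(predicted_xy, dtype=float)
    return [d for d in detections
            if np.linalg.norm(np.asarray(d, dtype=float) - p) <= radius]


if __name__ == "__main__":
    kf = ConstantVelocityKF(x0=100.0, y0=200.0)
    for true_centroid in [(104, 203), (109, 207), (113, 210)]:
        predicted = kf.predict()
        # Hypothetical detector output for the current frame.
        candidates = [(400.0, 50.0), (float(true_centroid[0]), float(true_centroid[1]))]
        nearby = gate_candidates(predicted, candidates, radius=50.0)
        if nearby:
            kf.update(nearby[0])   # in practice, pick the best appearance match
        print("predicted:", predicted, "gated candidates:", nearby)
```

In the framework described by the abstract, the detections passed to such a gate would come from the color Faster R-CNN, and the surviving candidates would be ranked by appearance features before the fusion step.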