Vehicle detection from multi-modal aerial imagery using YOLOv3 with mid-level fusion (Conference Presentation)

2020 
Target detection is an important problem in remote sensing, with crucial applications in law enforcement, military and security surveillance, search-and-rescue operations, and air traffic control, among others. Owing to the recently increased availability of computational resources, deep-learning-based methods have demonstrated state-of-the-art performance in target detection from unimodal aerial imagery. In addition, owing to the availability of remote-sensing data from various imaging modalities, such as RGB, infrared, hyperspectral, multispectral, synthetic aperture radar, and lidar, researchers have focused on leveraging the complementary information offered by these modalities. Over the past few years, deep-learning methods have demonstrated enhanced performance using multi-modal data. In this work, we propose a method for vehicle detection from multi-modal aerial imagery by means of a modified YOLOv3 deep neural network that conducts mid-level fusion. To the best of our knowledge, the proposed mid-level fusion architecture is the first of its kind to be used for vehicle detection from multi-modal aerial imagery using a hierarchical object detection network. Our experimental studies corroborate the advantages of the proposed method.
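The abstract does not specify the exact layer configuration of the modified YOLOv3, so the following is only a minimal sketch of the general mid-level fusion idea it describes: two modality-specific stems extract features independently, their mid-level feature maps are concatenated partway through the network, and a shared trunk with a YOLO-style head produces the detections. The class name, layer widths, fusion point, and the RGB-plus-infrared pairing are all illustrative assumptions, not the paper's actual architecture.

```python
# Sketch of mid-level fusion in a YOLO-style two-stream detector.
# All dimensions and the fusion point are assumptions for illustration.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch, stride=1):
    # Conv -> BatchNorm -> LeakyReLU, the basic unit used throughout Darknet/YOLOv3.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=stride, padding=1, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.LeakyReLU(0.1),
    )


class MidFusionDetector(nn.Module):
    """Two modality-specific stems whose mid-level feature maps are
    concatenated, then processed by a shared trunk and a YOLO-style head.
    (Hypothetical architecture; not the paper's exact network.)"""

    def __init__(self, num_anchors=3, num_classes=1):
        super().__init__()
        # Separate stems extract modality-specific mid-level features.
        self.rgb_stem = nn.Sequential(conv_block(3, 32), conv_block(32, 64, stride=2))
        self.ir_stem = nn.Sequential(conv_block(1, 32), conv_block(32, 64, stride=2))
        # Mid-level fusion: channel-wise concatenation followed by a 1x1 conv
        # to mix the modalities and restore the channel count.
        self.fuse = nn.Conv2d(128, 64, kernel_size=1)
        # Shared trunk and detection head; each grid cell predicts
        # (x, y, w, h, objectness, class scores) per anchor.
        self.trunk = nn.Sequential(
            conv_block(64, 128, stride=2), conv_block(128, 256, stride=2)
        )
        self.head = nn.Conv2d(256, num_anchors * (5 + num_classes), kernel_size=1)

    def forward(self, rgb, ir):
        fused = self.fuse(torch.cat([self.rgb_stem(rgb), self.ir_stem(ir)], dim=1))
        return self.head(self.trunk(fused))


if __name__ == "__main__":
    model = MidFusionDetector()
    rgb = torch.randn(1, 3, 416, 416)  # RGB aerial tile
    ir = torch.randn(1, 1, 416, 416)   # co-registered infrared tile
    out = model(rgb, ir)
    print(out.shape)  # torch.Size([1, 18, 52, 52]): per-cell anchor predictions
```

Fusing at a mid-level feature map, rather than concatenating raw pixels (early fusion) or merging final detections (late fusion), lets each stream learn modality-specific low-level filters while still sharing the bulk of the detection network across modalities.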