Multi-Modal Fusion Architecture Search for Land Cover Classification Using Heterogeneous Remote Sensing Images

2021 
Optical and SAR modalities provide complementary information on land properties, enabling better land cover classification. Most existing multi-modal land cover classification methods are based on two-stream convolutional neural networks (CNNs), which obtain fusion features by merging optical and SAR features taken from manually selected layers of the different streams. However, these methods ignore the semantic gap between the manually selected optical and SAR features, which may result in suboptimal fusion features. We tackle the problem of finding good fusion architectures for multi-modal land cover classification, inspired by neural architecture search (NAS), and introduce the multi-modal fusion architecture search network (M2PASNet). Extensive experimental results show the superior performance of our method on a broad co-registered optical and SAR dataset.
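The abstract does not specify how the fusion architecture is searched, but NAS methods in this vein commonly use a DARTS-style continuous relaxation: the fused feature is a softmax-weighted mixture of candidate fusion operations, and the mixture weights (architecture parameters) are learned jointly with the network. The sketch below illustrates that generic idea on toy feature vectors; the candidate operations and the name `mixed_fusion` are illustrative assumptions, not the paper's actual M2PASNet design.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

# Illustrative candidate operations for fusing an optical feature
# vector with a co-registered SAR feature vector (assumed, not
# the operation set used by M2PASNet).
def fuse_add(opt, sar):
    return opt + sar

def fuse_mul(opt, sar):
    return opt * sar

def fuse_max(opt, sar):
    return np.maximum(opt, sar)

CANDIDATES = [fuse_add, fuse_mul, fuse_max]

def mixed_fusion(opt_feat, sar_feat, alpha):
    """DARTS-style continuous relaxation: the fused feature is a
    softmax-weighted sum over all candidate fusion operations.
    alpha holds one learnable architecture parameter per candidate;
    after search, the argmax candidate is kept as the fusion op."""
    weights = softmax(alpha)
    return sum(w * op(opt_feat, sar_feat)
               for w, op in zip(weights, CANDIDATES))

# With alpha strongly favouring the first candidate, the mixture
# collapses (approximately) to element-wise addition.
opt = np.array([1.0, 2.0])
sar = np.array([3.0, 4.0])
alpha = np.array([100.0, 0.0, 0.0])
fused = mixed_fusion(opt, sar, alpha)
```

During search, `alpha` would be updated by gradient descent on a validation loss, so the network itself selects which layer combination and fusion operation to use rather than relying on a manual choice.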