Multimodal-Temporal Fusion: Blending Multimodal Remote Sensing Images to Generate Image Series With High Temporal Resolution

2019 
This paper tackles a general but interesting cross-modality problem in the remote sensing community: can multimodal images help to generate synthetic images in a time series and thereby improve temporal resolution? To this end, we explore multimodal-temporal fusion, in which we leverage the availability of additional cross-modality images to simulate the missing images in a time series. We propose a multimodal-temporal fusion framework and mainly focus on two kinds of information for the simulation: inter-modal cross-modality information and intra-modal temporal information. To exploit the cross-modality information, we take available paired images and learn a mapping between the different modalities using a deep neural network. To account for the temporal dependency among time-series images, we formulate a temporal constraint in the learning objective that encourages temporally consistent results. Experiments are conducted on two cross-modality image simulation applications (SAR to visible and visible to SWIR), and both visual and quantitative results demonstrate that the proposed model can successfully simulate missing images from cross-modality data.
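To make the two ingredients of the abstract concrete, here is a minimal PyTorch sketch of (a) a network that maps one modality to another from paired images and (b) a temporal constraint coupling consecutive dates. Everything here is an illustrative assumption: the names (CrossModalNet, temporal_consistency_loss, lambda_t), the architecture, and the exact form of the temporal term are hypothetical and not taken from the paper.

```python
# Hedged sketch of cross-modality mapping + a temporal consistency term.
# All names and the loss form are assumptions for illustration only;
# the paper's actual network and constraint may differ.
import torch
import torch.nn as nn

class CrossModalNet(nn.Module):
    """Hypothetical fully convolutional mapping from one modality to
    another (e.g. a 1-channel SAR patch to a 3-channel visible patch)."""
    def __init__(self, in_ch=1, out_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, out_ch, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def temporal_consistency_loss(fake_t, fake_tp1, real_t, real_tp1):
    """One plausible temporal constraint (an assumption, not the paper's
    exact formulation): the change between consecutive synthesized frames
    should match the change between the corresponding real frames."""
    return torch.mean(torch.abs((fake_tp1 - fake_t) - (real_tp1 - real_t)))

# One training step on a pair of consecutive dates (t, t+1) where paired
# cross-modality observations exist at both dates.
model = CrossModalNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
l1 = nn.L1Loss()
lambda_t = 0.1  # weight of the temporal term (hypothetical value)

# Random stand-ins for SAR inputs and visible targets at dates t and t+1.
sar_t, sar_tp1 = torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64)
vis_t, vis_tp1 = torch.rand(4, 3, 64, 64), torch.rand(4, 3, 64, 64)

fake_t, fake_tp1 = model(sar_t), model(sar_tp1)
loss = (l1(fake_t, vis_t) + l1(fake_tp1, vis_tp1)
        + lambda_t * temporal_consistency_loss(fake_t, fake_tp1, vis_t, vis_tp1))
opt.zero_grad()
loss.backward()
opt.step()
```

At inference, a date with only the source modality observed would be pushed through the trained mapping to synthesize the missing frame, with the temporal term having regularized the mapping toward temporally consistent series during training.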