DML: Differ-Modality Learning for Building Semantic Segmentation
2022
This work critically analyzes the problems arising from differ-modality building semantic segmentation in the remote sensing domain. With the growth of multimodality datasets, such as optical, synthetic aperture radar (SAR), and light detection and ranging (LiDAR), and the scarcity of semantic knowledge, learning from multimodality information has become increasingly relevant over the last few years. However, the different modalities often cannot be acquired simultaneously for the same area. Assume that we have SAR images with reference information in one place and optical images without reference in another; how can we learn relevant features of the optical images from the SAR images? We refer to this problem as differ-modality learning (DML). To solve the DML problem, we propose novel deep neural network architectures, which include image adaptation, feature adaptation, knowledge distillation, and self-training (ST) modules for different scenarios. We test the proposed methods on differ-modality remote sensing data (very-high-resolution SAR and RGB imagery from SpaceNet 6) for building semantic segmentation, achieving superior efficiency. The presented approach achieves the best performance when compared with state-of-the-art methods.
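The abstract names knowledge distillation as one of the modules used to transfer knowledge across modalities. As a minimal, hypothetical sketch of that general idea (not the authors' implementation), the snippet below shows a standard soft-label distillation loss in PyTorch, where a teacher network trained on labeled SAR images guides a student network on co-registered optical images; all function and variable names here are assumptions for illustration.

```python
# Illustrative sketch only: generic knowledge distillation for semantic
# segmentation across modalities. A teacher trained on labeled SAR imagery
# supervises a student on unlabeled optical imagery of the same scenes.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between softened teacher and student class maps.

    Both tensors are assumed to have shape (batch, num_classes, H, W).
    """
    t = temperature
    # Soften the per-pixel class distributions before comparing them.
    log_p_student = F.log_softmax(student_logits / t, dim=1)
    p_teacher = F.softmax(teacher_logits / t, dim=1)
    # Scale by t**2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (t * t)

# Hypothetical usage: the teacher sees the SAR view, the student sees the
# co-registered optical view of the same tiles.
# with torch.no_grad():
#     teacher_logits = teacher_model(sar_batch)
# loss = distillation_loss(student_model(optical_batch), teacher_logits)
```

In a cross-modality setting this loss would typically be combined with the image-adaptation and self-training components the abstract mentions; the sketch covers only the distillation term.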