Dynamic Fusion Network for Light Field Depth Estimation
2021
Focus-based methods have shown promising results for depth estimation in recent years. However, most existing focus-based depth estimation approaches depend on the maximal sharpness of the focal stack and ignore the spatial relationship between focal slices. The information loss caused by out-of-focus areas in the focal stack poses a further challenge for this task. In this paper, we propose a dynamic multi-modal learning strategy that incorporates both RGB data and the focal stack in our framework. Our goal is to fully exploit the spatial correlation within the focal stack by designing a pyramid ConvGRU, and to fuse the RGB data and the focal stack in an adaptive way by designing a multi-modal dynamic fusion module. The effectiveness of our method is demonstrated by state-of-the-art performance on two light field datasets.
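The abstract does not spell out the architecture, so the following is only a minimal PyTorch sketch of the two ideas it names: a convolutional GRU that recurs over focal slices while keeping spatial structure (the pyramid arrangement is omitted), and a learned per-pixel gate that adaptively mixes RGB features with focal-stack features. All module names, channel sizes, and shapes are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ConvGRUCell(nn.Module):
    """Convolutional GRU cell: recurrence over focal slices with
    convolutions, so spatial layout is preserved at every step."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        p = k // 2
        self.gates = nn.Conv2d(in_ch + hid_ch, 2 * hid_ch, k, padding=p)  # update + reset gates
        self.cand = nn.Conv2d(in_ch + hid_ch, hid_ch, k, padding=p)       # candidate hidden state
        self.hid_ch = hid_ch

    def forward(self, x, h=None):
        if h is None:
            h = x.new_zeros(x.size(0), self.hid_ch, x.size(2), x.size(3))
        z, r = torch.sigmoid(self.gates(torch.cat([x, h], 1))).chunk(2, 1)
        h_tilde = torch.tanh(self.cand(torch.cat([x, r * h], 1)))
        return (1 - z) * h + z * h_tilde

class GatedFusion(nn.Module):
    """Adaptive multi-modal fusion: a learned per-pixel gate weights
    RGB features against focal-stack features."""
    def __init__(self, ch):
        super().__init__()
        self.gate = nn.Conv2d(2 * ch, ch, 1)

    def forward(self, f_rgb, f_focal):
        g = torch.sigmoid(self.gate(torch.cat([f_rgb, f_focal], 1)))
        return g * f_rgb + (1 - g) * f_focal

# Toy usage (hypothetical shapes): 12 focal slices, batch of 2, 64x64 crops.
focal_stack = torch.randn(12, 2, 3, 64, 64)   # (slices, batch, C, H, W)
rgb_feat = torch.randn(2, 32, 64, 64)          # RGB features from a separate encoder

cell = ConvGRUCell(in_ch=3, hid_ch=32)
h = None
for slice_ in focal_stack:                     # recur along the focal dimension
    h = cell(slice_, h)

fused = GatedFusion(32)(rgb_feat, h)
print(fused.shape)                             # torch.Size([2, 32, 64, 64])
```

In this sketch the recurrence aggregates evidence across focal slices instead of relying on a per-pixel sharpness maximum, and the sigmoid gate lets the network lean on RGB features wherever the focal stack is uninformative (e.g., out-of-focus regions).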