LightGAN: A Deep Generative Model for Light Field Reconstruction

2020 
A light field image captured by a plenoptic camera can be considered a sampling of the light distribution within a given space. However, given the limited pixel count of the sensor, acquiring a high-resolution sample often comes at the expense of losing parallax information. In this work, we present a learning-based generative framework that overcomes this tradeoff by directly modeling the light field distribution. A key module of our model is the high-dimensional residual block, which fully exploits spatio-angular information. By learning the distribution directly, our approach can generate both high-quality sub-aperture images and densely sampled light fields. Experimental results on both real-world and synthetic datasets demonstrate that the proposed method outperforms other state-of-the-art approaches and achieves visually more realistic results.
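The abstract does not specify the internal layout of the high-dimensional residual block. As a rough illustration of the idea, the sketch below treats a light field as a 4-D array indexed by angular coordinates (u, v) and spatial coordinates (y, x), applies a simple neighbourhood filter across all four dimensions as a stand-in for a learned 4-D convolution, and adds a skip connection. The function names and the averaging filter are hypothetical, not taken from the paper.

```python
import numpy as np

def spatio_angular_filter(lf, k=1):
    """Average each sample with its neighbours along the angular (u, v)
    and spatial (y, x) axes. This is only a hand-written stand-in for a
    learned convolution that mixes angular and spatial information."""
    out = np.zeros_like(lf, dtype=float)
    count = 0
    for axis in range(lf.ndim):          # iterate over u, v, y, x
        for shift in (-k, k):            # both neighbours along each axis
            out += np.roll(lf, shift, axis=axis)
            count += 1
    return out / count

def high_dim_residual_block(lf):
    """Residual structure: output = input + filtered(input), so the block
    only has to model the correction on top of the identity mapping."""
    return lf + spatio_angular_filter(lf)

if __name__ == "__main__":
    # a toy 3x3 angular grid of 8x8 sub-aperture patches
    lf = np.random.rand(3, 3, 8, 8)
    out = high_dim_residual_block(lf)
    print(out.shape)  # (3, 3, 8, 8) -- shape is preserved
```

The residual (skip) connection is what lets such blocks be stacked deeply without degrading the input signal; in the paper's setting the hand-written filter would be replaced by trained high-dimensional convolutions.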