A Shading-Guided Generative Implicit Model for Shape-Accurate 3D-Aware Image Synthesis
2021
The advancement of generative radiance fields has pushed the boundary of
3D-aware image synthesis. Motivated by the observation that a 3D object should
look realistic from multiple viewpoints, these methods introduce a multi-view
constraint as regularization to learn valid 3D radiance fields from 2D images.
Despite the progress, they often fall short of capturing accurate 3D shapes due
to the shape-color ambiguity, limiting their applicability in downstream tasks.
In this work, we address this ambiguity by proposing a novel shading-guided
generative implicit model that is able to learn a starkly improved shape
representation. Our key insight is that an accurate 3D shape should also yield
a realistic rendering under different lighting conditions. This multi-lighting
constraint is realized by modeling illumination explicitly and performing
shading with various lighting conditions. Gradients are derived by feeding the
synthesized images to a discriminator. To compensate for the additional
computational burden of calculating surface normals, we further devise an
efficient volume rendering strategy via surface tracking, reducing the training
and inference time by 24% and 48%, respectively. Our experiments on multiple
datasets show that the proposed approach achieves photorealistic 3D-aware image
synthesis while capturing accurate underlying 3D shapes. We demonstrate
that our approach outperforms existing methods on 3D shape reconstruction, and
show its applicability to image relighting. Our code will
be released at https://github.com/XingangPan/ShadeGAN.
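
The multi-lighting constraint above relies on modeling illumination explicitly. Below is a minimal sketch of one way to realize it, assuming a simple Lambertian shading model with a single directional light and ambient/diffuse coefficients; the function and parameter names (lambertian_shading, k_ambient, k_diffuse) are illustrative and are not taken from the released code.

```python
import numpy as np

def lambertian_shading(albedo, normals, light_dir, k_ambient=0.6, k_diffuse=0.4):
    """Shade per-point albedo under a single directional light.

    albedo:    (N, 3) albedo predicted by the generator at sampled points
    normals:   (N, 3) unit surface normals, e.g. the normalized negative
               gradient of the density field at those points
    light_dir: (3,)   unit direction pointing toward the light
    """
    # Diffuse term: cosine between normal and light direction, clamped at 0.
    cos_theta = np.clip(normals @ light_dir, 0.0, None)  # (N,)
    shading = k_ambient + k_diffuse * cos_theta          # (N,)
    return albedo * shading[:, None]                     # (N, 3)

# Multi-lighting constraint: shade the same latent shape under several
# randomly sampled light directions; each shaded image is then fed to the
# discriminator, so an inaccurate shape produces unrealistic shading.
rng = np.random.default_rng(0)
albedo = rng.uniform(0.0, 1.0, size=(1024, 3))
normals = rng.normal(size=(1024, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
for _ in range(4):
    light = rng.normal(size=3)
    light /= np.linalg.norm(light)
    shaded = lambertian_shading(albedo, normals, light)
```

Because the shading depends on surface normals, a wrong geometry cannot look realistic under every sampled light, which is exactly the signal the discriminator exploits.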
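
The efficient volume rendering strategy mentioned above concentrates samples near an already-tracked surface instead of marching the full near/far interval of each ray. The sketch below illustrates that sampling pattern under the assumption that a cheap predictor (or a coarse pass) supplies a per-ray depth estimate; the function names, slab width, and sample count are illustrative and do not reproduce the released implementation.

```python
import numpy as np

def render_with_surface_tracking(density_fn, color_fn, origins, dirs,
                                 surface_depth, half_width=0.1, n_samples=8):
    """Volume-render rays by sampling only a thin slab around a tracked
    surface depth, rather than the whole ray.

    density_fn(x) -> (M,)   volume density at points x of shape (M, 3)
    color_fn(x)   -> (M, 3) radiance at those points
    origins, dirs: (R, 3)   ray origins and unit directions
    surface_depth: (R,)     per-ray depth estimate from a coarse pass
    """
    # Sample depths in a narrow window centred on the tracked surface.
    t = np.linspace(-half_width, half_width, n_samples)               # (S,)
    depths = surface_depth[:, None] + t[None, :]                      # (R, S)
    pts = origins[:, None, :] + depths[..., None] * dirs[:, None, :]  # (R, S, 3)

    flat = pts.reshape(-1, 3)
    sigma = density_fn(flat).reshape(depths.shape)                    # (R, S)
    rgb = color_fn(flat).reshape(*depths.shape, 3)                    # (R, S, 3)

    # Standard alpha compositing over the few remaining samples.
    delta = np.diff(depths, axis=1, append=depths[:, -1:] + 1e10)
    alpha = 1.0 - np.exp(-sigma * delta)
    trans = np.cumprod(np.concatenate(
        [np.ones_like(alpha[:, :1]), 1.0 - alpha[:, :-1]], axis=1), axis=1)
    weights = alpha * trans                                           # (R, S)
    return (weights[..., None] * rgb).sum(axis=1)                     # (R, 3)

# Toy usage: a unit sphere of constant density viewed from z = -3,
# whose front surface lies at depth 2.0 along the +z rays.
density_fn = lambda x: np.where(np.linalg.norm(x, axis=1) < 1.0, 10.0, 0.0)
color_fn = lambda x: np.full((x.shape[0], 3), 0.5)
origins = np.tile([0.0, 0.0, -3.0], (4, 1))
dirs = np.tile([0.0, 0.0, 1.0], (4, 1))
rgb = render_with_surface_tracking(density_fn, color_fn, origins, dirs,
                                   surface_depth=np.full(4, 2.0))
```

Once the surface depth is roughly known, a handful of samples in a thin slab suffices for compositing, which is why such a scheme can cut both training and inference time substantially.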