NeRS: Neural Reflectance Surfaces for Sparse-view 3D Reconstruction in the Wild
2021
Recent history has seen a tremendous growth of work exploring implicit
representations of geometry and radiance, popularized through Neural Radiance
Fields (NeRF). Such works are fundamentally based on an (implicit) volumetric
representation of occupancy, allowing them to model diverse scene structure
including translucent objects and atmospheric obscurants. But because the vast
majority of real-world scenes are composed of well-defined surfaces, we
introduce a surface analog of such implicit models called Neural Reflectance
Surfaces (NeRS). NeRS learns a neural shape representation of a closed surface
that is diffeomorphic to a sphere, guaranteeing watertight reconstructions.
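To make the sphere parameterization concrete, here is a minimal sketch (our illustration, not the authors' implementation) of a network that maps points on the unit sphere to 3D surface points; since every output point is the image of a sphere point, the resulting surface is closed by construction. The network width and the residual-offset design are illustrative assumptions.

```python
# Sketch of a sphere-parameterized surface: an MLP f maps points u on the unit
# sphere S^2 to 3D surface points f(u), so the predicted surface is watertight
# by construction. Hidden size and depth are illustrative choices.
import torch
import torch.nn as nn

class SphereSurface(nn.Module):
    def __init__(self, hidden: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),  # per-point 3D offset
        )

    def forward(self, u: torch.Tensor) -> torch.Tensor:
        # u: (N, 3) unit-sphere points; returns (N, 3) surface points.
        # Predicting sphere point + offset keeps the map near the identity at
        # initialization, which eases optimization.
        return u + self.mlp(u)

# Usage: deform (e.g.) icosphere vertices into the object surface.
u = torch.nn.functional.normalize(torch.randn(1024, 3), dim=-1)
surface_points = SphereSurface()(u)
```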
Even more importantly, surface parameterizations allow NeRS to learn (neural)
bidirectional reflectance distribution functions (BRDFs) that factorize
view-dependent appearance into environmental illumination, diffuse color
(albedo), and specular "shininess."
results on synthetic scenes or controlled in-the-lab capture, we assemble a
novel dataset of multi-view images from online marketplaces for selling goods.
Such "in-the-wild" multi-view image sets pose a number of challenges, including
a small number of views with unknown/rough camera estimates. We demonstrate
that surface-based neural reconstructions enable learning from such data,
outperforming volumetric neural rendering-based reconstructions. We hope that
NeRS serves as a first step toward building scalable, high-quality libraries of
real-world shape, materials, and illumination. The project page with code and
video visualizations can be found at https://jasonyzhang.com/ners.