Unified Neural Encoding of BTFs
2020
Realistic rendering using discrete reflectance measurements is challenging, because arbitrary directions on the light and view
hemispheres are queried at render time, incurring large memory requirements and the need for interpolation. This explains the
desire for compact and continuously parametrized models akin to analytic BRDFs; however, fitting BRDF parameters to complex
data such as BTF texels can prove challenging, as models tend to describe restricted function spaces that cannot encompass
real-world behavior. Recent advances in this area have increasingly relied on neural representations that are trained to reproduce
acquired reflectance data. The associated training process is extremely costly and must typically be repeated for each material.
Inspired by autoencoders, we propose a unified network architecture that is trained on a variety of materials, and which projects
reflectance measurements to a shared latent parameter space. Similarly to SVBRDF fitting, real-world materials are represented
by parameter maps, and the decoder network is analogous to the analytic BRDF expression (also parametrized on light and view
directions for practical rendering applications). With this approach, encoding and decoding materials becomes a simple matter of
evaluating the network. We train and validate on BTF datasets from the University of Bonn, but our approach imposes no prerequisites
on either the number of angular reflectance samples or their positions. Additionally, we show that the latent space is well-behaved
and can be sampled from, for applications such as mipmapping and texture synthesis.
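The decoder described above can be pictured as a small network that, like an analytic BRDF, maps a per-texel latent code together with light and view directions to a reflectance value. The following is a minimal, hypothetical sketch of such a decoder in PyTorch; the layer widths, latent dimensionality, and class name are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class LatentBRDFDecoder(nn.Module):
    """Hypothetical sketch: decodes a latent material code plus light/view
    directions into an RGB reflectance value, playing the role of an
    analytic BRDF expression."""

    def __init__(self, latent_dim=32, hidden=64):
        super().__init__()
        # Input is the latent code concatenated with two 3D direction vectors.
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),  # RGB reflectance output
        )

    def forward(self, z, wi, wo):
        # z: (batch, latent_dim) latent codes from the parameter map
        # wi, wo: (batch, 3) light and view directions
        return self.net(torch.cat([z, wi, wo], dim=-1))

decoder = LatentBRDFDecoder()
z = torch.randn(4, 32)   # latent codes for 4 texels
wi = torch.randn(4, 3)   # light directions
wo = torch.randn(4, 3)   # view directions
rgb = decoder(z, wi, wo)
print(rgb.shape)         # torch.Size([4, 3])
```

At render time, evaluating a material at a surface point would reduce to a lookup of its latent code in the parameter map followed by one forward pass of this decoder for the queried light and view directions.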