The wide adoption of path-tracing algorithms in high-end realistic rendering has stimulated many diverse research initiatives. In this paper we present a coherent survey of methods that utilize Monte Carlo integration for estimating light transport in scenes containing participating media. Our work complements the volume-rendering state-of-the-art report by Cerezo et al. [CPP*05]; we review work published in the decade since that report appeared, and include earlier methods that are key for building light transport paths in a stochastic manner. We begin by describing analog and non-analog procedures for free-path sampling and discuss various expected-value, collision, and track-length estimators for computing transmittance. We then review the various rendering algorithms that employ these as building blocks for path sampling. Special attention is devoted to null-collision methods that utilize fictitious matter to handle spatially varying densities; we import two “next-flight” estimators originally developed in nuclear sciences. Whenever possible, we draw connections between image-synthesis techniques and methods from particle physics and neutron transport to provide the reader with a broader context.
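As a concrete illustration of the null-collision idea mentioned above, the following minimal Python sketch implements delta (Woodcock) tracking for free-path sampling in a heterogeneous medium. The function names and the one-dimensional parameterization along the ray are ours for illustration; the survey covers many variants and estimators beyond this one.

```python
import math
import random

def delta_tracking(sigma_t, sigma_bar, t_max):
    """Sample a free-path length along a ray through a heterogeneous medium.

    sigma_t   -- callable returning the extinction coefficient at distance t
    sigma_bar -- majorant: an upper bound on sigma_t along the ray
    t_max     -- distance at which the ray exits the medium

    Returns the distance to a real collision, or None if the ray escapes.
    """
    t = 0.0
    while True:
        # Tentative flight in the homogenized medium defined by the majorant.
        t -= math.log(1.0 - random.random()) / sigma_bar
        if t >= t_max:
            return None  # escaped without a real collision
        # Accept as a real collision with probability sigma_t/sigma_bar;
        # otherwise the collision is fictitious (null) and tracking continues.
        if random.random() < sigma_t(t) / sigma_bar:
            return t

# Example: a 10-unit ray through a sinusoidally varying medium.
dist = delta_tracking(lambda t: 0.4 + 0.4 * math.sin(t), 0.8, 10.0)
print(dist)
```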
Procedural modeling makes it possible to create highly complex 3D scenes from a small set of construction rules, which has several advantages over storing the full data of an object. The most important ones are a very small memory footprint and the ability to generate infinite variations of one prototype object from the same set of rules. However, the problem procedural modeling imposes on the user is defining a reasonable set of rules that generates a specific object. To simplify this task, we present new interaction metaphors for a graphical user interface and a minimal set of geometric operations that allow the user to efficiently create such rules and the respective models. These metaphors are implemented in a prototype system and evaluated in user tests with regard to usability and user performance. The results show that the system enables even inexperienced users to create complex 3D objects via procedural modeling with the presented approach.
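To make the idea of rule-based generation concrete, here is a toy, hypothetical rule set in Python: a handful of stochastic rewriting rules expands a start symbol into different variants of one prototype object. This grammar is purely illustrative and is not the rule set or the geometric operations from the paper.

```python
import random

# A toy grammar in the spirit of procedural modeling: nonterminals expand
# into sequences of symbols; stochastic rule choice yields endless
# variations of the same prototype from one small rule set.
RULES = {
    "BUILDING": [["FLOOR", "BUILDING"], ["FLOOR", "ROOF"]],
    "FLOOR":    [["wall", "window", "wall"], ["wall", "door", "wall"]],
    "ROOF":     [["flat_roof"], ["gable_roof"]],
}

def expand(symbol, rng, depth=0, max_depth=6):
    if symbol not in RULES or depth >= max_depth:
        return [symbol]  # terminal primitive (or recursion depth cap reached)
    production = rng.choice(RULES[symbol])
    out = []
    for s in production:
        out.extend(expand(s, rng, depth + 1, max_depth))
    return out

# The same rules with two seeds: two variations of one prototype object.
print(expand("BUILDING", random.Random(1)))
print(expand("BUILDING", random.Random(2)))
```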
We present a compact and efficient representation of spectra for accurate rendering using more than three dimensions. While tristimulus color spaces are sufficient for color display, a spectral renderer has to simulate light transport per wavelength; consequently, emission spectra and surface albedos need to be known at each wavelength. It is practical to store dense samples for emission spectra, but for albedo textures the memory requirements of this approach are unreasonable. Prior works that approximate dense spectra from tristimulus data introduce strong errors under illuminants with sharp peaks and in indirect illumination. We represent spectra by an arbitrary number of Fourier coefficients. However, we do not use a common truncated Fourier series because its ringing could lead to albedos below zero or above one. Instead, we present a novel approach for the reconstruction of bounded densities based on the theory of moments. The core of our technique is our bounded maximum entropy spectral estimate. It uses an efficient closed form to compute a smooth signal between zero and one that matches the given Fourier coefficients exactly. Still, a ground truth that localizes all of its mass around a few wavelengths can be reconstructed adequately. Therefore, our representation covers the full gamut of valid reflectances. The resulting textures are compact because each coefficient can be stored in 10 bits. For compatibility with existing tristimulus assets, we implement a mapping from tristimulus color spaces to three Fourier coefficients. With three coefficients, our technique gives state-of-the-art results without some of the drawbacks of related work; with four to eight coefficients, our representation is superior to all existing representations. Our focus is on offline rendering, but we also demonstrate that the technique is fast enough for real-time rendering.
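As an illustration of the analysis step, the sketch below computes trigonometric moments (Fourier coefficients) from a densely sampled reflectance. The linear wavelength-to-phase warp and all names are our assumptions for illustration only, and the bounded maximum entropy reconstruction that maps the coefficients back to a signal between zero and one is not reproduced here.

```python
import numpy as np

def trigonometric_moments(wavelengths, albedo, num_moments):
    """Compute Fourier (trigonometric moment) coefficients of a spectrum.

    wavelengths -- dense sample positions in nm (e.g. 360..830)
    albedo      -- reflectance values in [0, 1] at those wavelengths
    num_moments -- number of coefficients to keep (three to eight in the
                   configurations described in the abstract above)
    """
    # Warp wavelengths to a phase in [-pi, pi]. A plain linear warp is used
    # here for simplicity; the actual method may use a different mapping.
    phase = np.pi * (2.0 * (wavelengths - wavelengths[0])
                     / (wavelengths[-1] - wavelengths[0]) - 1.0)
    moments = []
    for k in range(num_moments):
        integrand = albedo * np.exp(-1j * k * phase)
        moments.append(np.trapz(integrand, phase) / (2.0 * np.pi))
    return np.asarray(moments)

# A smooth test reflectance; three moments roughly mirror the
# tristimulus-compatible configuration.
lam = np.linspace(360.0, 830.0, 471)
refl = 0.5 + 0.4 * np.cos(2.0 * np.pi * (lam - 360.0) / 470.0)
print(trigonometric_moments(lam, refl, 3))
```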
Physically based image synthesis methods, a research direction in computer graphics (CG), are capable of simulating optical measuring systems in their entirety and thus constitute an interesting approach for the development, simulation, optimization, and validation of such systems. In addition, other CG methods, so-called procedural modeling techniques, can be used to quickly generate large sets of virtual samples and scenes thereof that exhibit the same variety as physical test objects and real scenes (e.g., when digitized sample data are unavailable or difficult to acquire). Appropriate image synthesis (rendering) techniques yield realistic image formation for the virtual scenes, accounting for light sources, materials, complex lens systems, and sensor properties, and can be used to evaluate and improve complex measuring systems and automated optical inspection (AOI) systems independent of a physical realization. In this paper, we provide an overview of suitable image synthesis methods and their characteristics, discuss the challenges in designing and specifying a given measuring situation so as to allow for reliable simulation and validation, and describe an image generation pipeline suitable for the evaluation and optimization of measuring and AOI systems.
The last few years have seen a decisive move of the movie-making industry towards rendering with physically-based methods, mostly implemented in terms of path tracing. While path tracing reached most VFX houses and animation studios at a time when a physically-based approach to rendering, and especially to material modelling, was already firmly established, the new tools brought with them a whole new balance, and many new workflows have evolved to find a new equilibrium. Letting go of instincts based on hard-learned lessons from a previous time has been challenging for some, and many different takes on a practical deployment of the new technologies have emerged. While the language and toolkit available to technical directors keep closing the gap between lighting in the real world and the light transport simulations run in software, an understanding of the limitations of the simulation models and a good intuition of the tradeoffs and approximations at play are of fundamental importance to make efficient use of the available resources. In this course, the novel workflows that emerged during these transitions at a number of large facilities are presented to a wide audience including technical directors, artists, and researchers.
The last few years have seen a decisive move of the movie-making industry towards rendering with physically-based methods, mostly implemented in terms of path tracing. Increasing demands on the realism of lighting, rendering, and material modeling, paired with a working paradigm that naturally models the behaviour of light as in the real world, mean that more and more movies each year are created the physically-based way. This shift has also recently been recognised by the Academy of Motion Picture Arts and Sciences, which at this year's SciTech ceremony awarded three ray-tracing renderers for their crucial contribution to this move. While the language and toolkit available to technical directors get closer and closer to natural language, an understanding of the techniques and algorithms behind the workings of the renderer of choice is still of fundamental importance to make efficient use of the available resources, especially when hard-learned lessons and tricks from the previous world of rasterization-based rendering can introduce confusion and cause costly mistakes. In this course, the architectures and novel possibilities of the next generation of production renderers are introduced to a wide audience including technical directors, artists, and researchers.
Emissive media are often challenging to render: in thin regions where only a few scattering events occur, emission is poorly sampled, while sampling events for emission can be disadvantageous due to absorption in dense regions. We extend the standard path space measurement contribution to also collect emission along path segments, not only at vertices. We apply this extension to two estimators: extending paths via scattering and distance sampling, and next event estimation. To do so, we unify the two approaches and derive the corresponding Monte Carlo estimators, interpreting next event estimation as a solid-angle sampling technique. We avoid connecting paths to vertices hidden behind dense absorbing layers of smoke by also incorporating transmittance sampling into next event estimation. We demonstrate the advantages of our line-integration approach, which yields estimators with lower variance since entire segments are accounted for. In addition, our novel forward next event estimation technique yields faster run times than previous next event estimation, as it penetrates less deeply into dense volumes.
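The benefit of collecting emission along entire segments can be seen in the homogeneous special case, where the line integral has a closed form. The Python sketch below is our own simplification (the paper handles heterogeneous media); it compares the zero-variance segment contribution against the standard estimator that samples a single vertex by distance sampling and evaluates emission only there.

```python
import math
import random

def emission_along_segment(sigma_a, sigma_t, L_e, d):
    """Analytic emission collected along a segment of length d in a
    homogeneous medium: integral of exp(-sigma_t*s) * sigma_a * L_e ds."""
    return sigma_a * L_e * (1.0 - math.exp(-sigma_t * d)) / sigma_t

def emission_at_sampled_vertex(sigma_a, sigma_t, L_e, d):
    """Standard single-sample estimator: sample a distance proportionally
    to transmittance and evaluate the emission at that vertex."""
    s = -math.log(1.0 - random.random()) / sigma_t
    if s >= d:
        return 0.0  # the sampled vertex lies beyond the segment
    # pdf(s) = sigma_t * exp(-sigma_t * s), so the transmittance cancels.
    return sigma_a * L_e / sigma_t

# Both estimators agree in expectation, but the segment integral is exact.
sigma_a, sigma_t, L_e, d = 0.2, 0.5, 1.0, 3.0
n = 100000
mc = sum(emission_at_sampled_vertex(sigma_a, sigma_t, L_e, d)
         for _ in range(n)) / n
print(emission_along_segment(sigma_a, sigma_t, L_e, d), mc)
```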