Acquiring shell textures from a single image for realistic fur rendering
Citations: 0 · References: 1 · Related Papers: 10
Abstract:
To synthesize a realistic appearance of mammals, it is necessary to express the disorderly lie of hairs. The shell texturing method, proposed by Lengyel [2001], can synthesize a realistic fur appearance over arbitrary surfaces in real time. Prior to rendering, it is necessary to prepare several shell textures as a pre-process. However, acquiring appropriate shell textures is a complicated and time-consuming task. In this paper, we present a novel method to acquire shell textures from only a single input picture of actual animal fur. Since all shell textures are automatically computed by a pixel shader at run time, no complicated pre-computation is necessary. Furthermore, the conventional shell texturing method typically employs 16 textures, which require a large amount of graphics memory. Because our method requires only a single texture, we achieve a significant reduction in memory usage for practical purposes.
Keywords:
Shader
Texture memory
Texture Synthesis
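The run-time shell generation described in the abstract can be illustrated with a minimal CPU sketch: a single fur image is thresholded against each shell's normalized height, so brighter texels (standing for longer hairs) survive into higher shells. The thresholding rule and the toy texture below are assumptions for illustration, not the authors' actual pixel-shader code.

```python
import numpy as np

def shell_alpha(fur_intensity, num_shells):
    """Derive per-shell alpha masks from a single fur texture.

    A texel is covered at shell i if its intensity exceeds the
    normalized shell height i / num_shells -- brighter texels stand
    for longer hairs that reach higher shells. (Illustrative rule;
    the paper performs this test in a pixel shader at run time.)
    """
    shells = []
    for i in range(num_shells):
        height = i / num_shells
        shells.append((fur_intensity > height).astype(np.float32))
    return shells

# Toy 2x2 "fur" image: one long hair (0.9), one short (0.3), bare spots.
fur = np.array([[0.9, 0.3], [0.0, 0.0]])
layers = shell_alpha(fur, 4)
```

Because each layer is a pure function of the single input texture and the shell index, no layer ever needs to be stored: a shader can evaluate the test per fragment, which is what makes the single-texture memory saving possible.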
We present a method for optimizing the reconstruction and rendering of 3D objects from multiple images by utilizing the latest features of consumer-level graphics hardware based on shader model 4.0. We accelerate visual hull reconstruction by rewriting a shape-from-silhouette algorithm to execute on the GPU's parallel architecture. Rendering is optimized through the application of geometry shaders to generate billboarding micro-facets textured with captured images. We also present a method for handling occlusion in the camera selection process that is optimized for execution on the GPU. Execution time is further improved by rendering intermediate results directly to texture to minimize the number of data transfers between graphics and main memory. We show our GPU-based system to be significantly more efficient than a purely CPU-based approach, due to the parallel nature of the GPU, while maintaining graphical quality.
Shader
Texture memory
Real-time rendering
Visual hull
Citations (8)
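The shape-from-silhouette test that this paper parallelizes on the GPU is simple to state: a voxel belongs to the visual hull only if it projects inside every camera's silhouette. A minimal CPU sketch, using two hypothetical orthographic views (the projection model and grid size are assumptions for illustration):

```python
import numpy as np

def carve_visual_hull(silhouettes, project, grid_shape=(4, 4, 4)):
    """Shape-from-silhouette carving on a voxel grid (a CPU sketch of
    the per-voxel test the paper runs in parallel on the GPU). A voxel
    survives only if it projects inside every silhouette mask."""
    grid = np.ones(grid_shape, dtype=bool)
    for view, sil in enumerate(silhouettes):
        for idx in np.argwhere(grid):
            r, c = project(view, idx)
            if not sil[r, c]:
                grid[tuple(idx)] = False
    return grid

# Two hypothetical orthographic cameras: view 0 drops z, view 1 drops y.
def project(view, p):
    x, y, z = p
    return (x, y) if view == 0 else (x, z)

sil0 = np.zeros((4, 4), dtype=bool)
sil0[:2, :] = True                  # front view: object occupies x < 2
sil1 = np.ones((4, 4), dtype=bool)  # side view: no constraint
hull = carve_visual_hull([sil0, sil1], project)
```

Each voxel's test is independent of every other voxel's, which is exactly the data-parallel structure a GPU rewrite exploits.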
The bidirectional texture function (BTF) is a 6D function that describes the appearance of a real-world surface as a function of lighting and viewing directions. The BTF can model the fine-scale shadows, occlusions, and specularities caused by surface mesostructures. We present algorithms for efficient synthesis of BTFs on arbitrary surfaces and for hardware-accelerated rendering. For both synthesis and rendering, a main challenge is handling the large amount of data in a BTF sample. To address this challenge, we approximate the BTF sample by a small number of 4D point appearance functions (PAFs) multiplied by 2D geometry maps. The geometry maps and PAFs lead to efficient synthesis and fast rendering of BTFs on arbitrary surfaces. For synthesis, a surface BTF can be generated by applying a texton-based synthesis algorithm to a small set of 2D geometry maps while leaving the companion 4D PAFs untouched. As for rendering, a surface BTF synthesized using geometry maps is well-suited for leveraging the programmable vertex and pixel shaders on the graphics hardware. We present a real-time BTF rendering algorithm that runs at the speed of about 30 frames/second on a mid-level PC with an ATI Radeon 8500 graphics card. We demonstrate the effectiveness of our synthesis and rendering algorithms using both real and synthetic BTF samples.
Shader
Texture memory
Real-time rendering
Tiled rendering
3D rendering
Graphics hardware
Alternate frame rendering
Parallel rendering
Citations (54)
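The core compression idea above, approximating a high-dimensional appearance sample as a small sum of spatial factors times angular factors, can be sketched with a plain low-rank factorization. The synthetic "BTF sample" below and the use of SVD are assumptions for illustration; the paper's actual decomposition into geometry maps and PAFs is constructed differently.

```python
import numpy as np

# Hypothetical BTF sample flattened to (texels, light/view conditions);
# a near rank-1 matrix plus noise stands in for real measurements.
rng = np.random.default_rng(0)
btf = rng.random((64, 1)) @ rng.random((1, 32)) + 0.01 * rng.random((64, 32))

# Keep k terms: the spatial factors play the role of the 2D geometry
# maps, the angular factors the role of the 4D PAFs.
k = 2
U, S, Vt = np.linalg.svd(btf, full_matrices=False)
geometry_maps = U[:, :k] * S[:k]   # per-texel spatial weights
pafs = Vt[:k, :]                   # per-term light/view response
approx = geometry_maps @ pafs

rel_err = np.linalg.norm(approx - btf) / np.linalg.norm(btf)
```

The payoff is the same in both cases: synthesis only has to manipulate the small 2D spatial factors, while the angular factors are reused untouched at render time.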
We present a practical approach for implementing the projected tetrahedra (PT) algorithm for interactive volume rendering of unstructured data using programmable graphics cards. Unlike similar works reported earlier, our method employs two fragment shaders, one for computing the tetrahedra projections and another for rendering the elements. We achieve interactive rates by storing the model in texture memory and avoiding the redundant projections of implementations that use vertex shaders. Our algorithm is capable of rendering over 2.0 M Tet/s on current graphics hardware, making it competitive with recent ray-casting approaches, while occupying a substantially smaller memory footprint.
Shader
Texture memory
Graphics hardware
Real-time rendering
Tiled rendering
Parallel rendering
Memory footprint
Alternate frame rendering
Citations (15)
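The projection step that this paper moves into a fragment shader begins by classifying each tetrahedron's screen-space footprint: in the classic PT scheme it is "class 1" when one projected vertex falls inside the triangle of the other three, and "class 2" when the silhouette is a quadrilateral. A minimal sketch of that classification (degenerate and boundary cases are glossed over here):

```python
def sign(p1, p2, p3):
    """2D cross product giving the side of edge (p2, p3) that p1 lies on."""
    return (p1[0] - p3[0]) * (p2[1] - p3[1]) - (p2[0] - p3[0]) * (p1[1] - p3[1])

def inside_triangle(p, a, b, c):
    """Point-in-triangle test via edge-sign agreement (boundary counts as inside)."""
    d1, d2, d3 = sign(p, a, b), sign(p, b, c), sign(p, c, a)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)

def classify_projection(verts2d):
    """Return 1 if some projected vertex lies inside the triangle of the
    other three (PT class 1), else 2 (quadrilateral silhouette)."""
    for i in range(4):
        others = [verts2d[j] for j in range(4) if j != i]
        if inside_triangle(verts2d[i], *others):
            return 1
    return 2

class_a = classify_projection([(0, 0), (4, 0), (0, 4), (1, 1)])  # vertex inside
class_b = classify_projection([(0, 0), (1, 0), (1, 1), (0, 1)])  # convex quad
```

The class determines how the tetrahedron is decomposed into triangles and where the "thick vertex" carrying the maximum ray length goes; performing this per element in a fragment shader is what lets the method skip the redundant per-vertex projections.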
While texture synthesis on surfaces has received much attention in computer graphics, the ideal solution that quickly produces high-quality textures with little user intervention has remained elusive. The algorithm presented in this paper brings us closer to that goal by generating high-quality textures on arbitrary meshes in a matter of seconds. It achieves that by separating texture preprocessing from texture synthesis and accelerating the candidate search process. The result of this is a mapping of every triangle in a mesh to the original texture sample with no need for additional texture memory. The whole process is fully automatic, yet still user controllable. It also places no special restrictions on the mesh or on the texture, and the original mesh is not modified in any way. A preprocessed texture sample can be used to synthesize a texture map on any number of meshes.
Texture atlas
Texture filtering
Texture compression
Texture Synthesis
Texture (cosmology)
Projective texture mapping
Texture memory
Triangle mesh
T-vertices
Citations (33)
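The separation of texture preprocessing from synthesis described above is commonly realized with k-coherence-style candidate precomputation: for every texel of the sample, the k most similar texels by neighborhood are found once, so the runtime search only examines those few candidates instead of the whole texture. A brute-force sketch of the preprocessing pass (the 3x3 neighborhood, wrap boundary, and distance metric are assumptions; the paper's exact acceleration may differ):

```python
import numpy as np

def precompute_candidates(texture, k=2):
    """For every texel, store the k most similar other texels by
    3x3-neighborhood L2 distance (k-coherence-style preprocessing)."""
    h, w = texture.shape
    pad = np.pad(texture, 1, mode='wrap')
    # Stack each texel's 3x3 neighborhood as a 9-dimensional feature.
    feats = np.stack([pad[r:r + h, c:c + w]
                      for r in range(3) for c in range(3)], axis=-1)
    flat = feats.reshape(h * w, 9)
    d = ((flat[:, None, :] - flat[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d, np.inf)      # a texel is not its own candidate
    return np.argsort(d, axis=1)[:, :k]

tex = np.array([[0., 0., 1.],
                [0., 0., 1.],
                [1., 1., 0.]])
cands = precompute_candidates(tex, k=2)
```

Because the candidate table depends only on the sample, it can be reused to synthesize any number of meshes, which matches the abstract's claim that one preprocessed sample serves many targets.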
In this paper, we propose a practical method for hardware-accelerated rendering of the depth image-based representation (DIBR) of 3D graphic objects using the graphics processing unit (GPU). The proposed method overcomes the drawbacks of conventional rendering, namely that it is slow, since it is hardly assisted by graphics hardware, and that surface lighting is static. Utilizing the new features of modern GPUs and programmable shader support, we develop an efficient hardware-accelerated rendering algorithm for depth image-based 3D objects. Surface rendering in response to varying illumination is performed inside the vertex shader, while adaptive point splatting is performed inside the fragment shader. Experimental results show that the rendering speed increases considerably compared with software-based rendering and the conventional OpenGL-based rendering method.
Shader
OpenGL
Tiled rendering
Real-time rendering
3D rendering
Alternate frame rendering
Parallel rendering
Texture memory
Graphics hardware
Graphics processing unit
Citations (0)
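The point-splatting half of the pipeline above can be sketched as a forward warp with a z-buffer: each depth-image pixel is shifted by a parallax amount and drawn as a small square splat, with nearer points overwriting farther ones. The parallax rule `offset / depth` and the toy inputs are hypothetical; the paper does this adaptively in the fragment shader.

```python
import numpy as np

def splat_depth_image(depth, colors, offset, splat=0):
    """Forward-warp a depth image by a per-pixel parallax shift,
    drawing each warped point as a square splat with z-buffering
    (a toy stand-in for shader-based adaptive point splatting)."""
    h, w = depth.shape
    out = np.zeros((h, w))
    zbuf = np.full((h, w), np.inf)
    for r in range(h):
        for c in range(w):
            nc = c + int(round(offset / depth[r, c]))  # hypothetical parallax
            for dr in range(-splat, splat + 1):
                for dc in range(-splat, splat + 1):
                    rr, cc = r + dr, nc + dc
                    if 0 <= rr < h and 0 <= cc < w and depth[r, c] < zbuf[rr, cc]:
                        zbuf[rr, cc] = depth[r, c]       # keep the nearest point
                        out[rr, cc] = colors[r, c]
    return out

depth = np.ones((3, 3))
colors = np.arange(9, dtype=float).reshape(3, 3)
novel = splat_depth_image(depth, colors, offset=1.0)
```

Note the hole left in the first column where no point lands; enlarging the splat size where samples are sparse is precisely what "adaptive" splatting addresses.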
We present a novel technique for rendering large-scale volume datasets interactively on general-purpose PC hardware. To circumvent the limited texture memory available for texture-based volume rendering, the dataset is partitioned into bricks of reasonable size. The bricks are loaded to the graphics hardware dynamically and rendered using 3D texture mapping. During rendering, only one brick resides in texture memory. Additionally, the sophisticated functionality of PC graphics hardware is utilized to estimate the gradient on the fly, avoiding the huge memory consumption of previous approaches. Using a prototype implementation of the algorithm, we are able to perform fast data loading and interactive visualization of large datasets on a single standard PC.
Texture memory
Graphics hardware
3D rendering
Parallel rendering
Tiled rendering
Texture atlas
Citations (1)
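The bricking scheme above is straightforward to sketch: partition the volume into fixed-size blocks and stream them one at a time, so texture memory only ever holds a single brick. A minimal illustration (brick size and volume are arbitrary; real implementations also overlap bricks by one voxel so interpolation and on-the-fly gradients stay correct at brick borders):

```python
import numpy as np

def iter_bricks(volume, brick):
    """Partition a volume into bricks of at most `brick` voxels per
    axis, yielding (origin, data). During rendering only one brick at
    a time would be uploaded to texture memory, as in the paper."""
    for z in range(0, volume.shape[0], brick):
        for y in range(0, volume.shape[1], brick):
            for x in range(0, volume.shape[2], brick):
                yield (z, y, x), volume[z:z + brick, y:y + brick, x:x + brick]

vol = np.arange(64, dtype=np.float32).reshape(4, 4, 4)
bricks = list(iter_bricks(vol, 2))
```

Reassembling the bricks at their origins reproduces the original volume exactly, which is the invariant a streaming renderer relies on.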
There has been a profound interest of late in the digitization and reconstruction of historical monuments. Rendering massive monument models requires a cluster of workstations because of the computational infeasibility of rendering on a single machine. Moreover, interactive rendering of these massive models in an immersive environment is only possible over a cluster of machines. In this paper, we present the design of a distributed rendering system to efficiently handle models with massive textures. A server holds the skeleton of the whole model and divides the screen space, balancing the rendering load among multiple clients. Each client loads only the geometry and textures required to render its sub-scene. We present a virtual texturing method for handling massive textures over the distributed rendering system. These textures are combined into a texture atlas which is split into equally sized tiles. A virtual texture is built over this atlas, with each pixel representing a tile in the atlas. An efficient caching module loads only the required tiles into memory, which are identified using the virtual texture. A fragment shader uses the virtual texture as a mapping to the physical texture in memory to generate the fragments. We demonstrate the performance of our system on a 350M-triangle, 500-gigapixel textured model of the Vittala temple.
Texture memory
Tiled rendering
Real-time rendering
Shader
Parallel rendering
Citations (0)
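The tile-resolution step at the heart of the virtual texturing scheme above can be sketched as a page-table lookup with demand loading: a virtual coordinate selects a tile, the tile is fetched into the cache if absent, and the texel is read from the cached tile. The dictionary-backed "page table" below is a stand-in for the paper's disk-resident atlas tiles.

```python
def virtual_lookup(page_table, cache, u, v, tile_size):
    """Resolve a virtual texture coordinate through a page table,
    loading the needed tile into the cache on first use (a sketch of
    the caching module; real systems stream tiles from disk)."""
    tile = (v // tile_size, u // tile_size)
    if tile not in cache:
        cache[tile] = page_table[tile]   # stand-in for a disk load
    return cache[tile][v % tile_size][u % tile_size]

# A 2x4 virtual texture split into two 2x2 tiles.
tiles = {(0, 0): [[1, 2], [3, 4]], (0, 1): [[5, 6], [7, 8]]}
cache = {}
texel = virtual_lookup(tiles, cache, u=3, v=1, tile_size=2)
```

Only the tiles actually touched by a frame enter the cache, which is what keeps each client's memory footprint proportional to its sub-scene rather than to the full 500-gigapixel atlas.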
The programmable vertex shader and fragment shader are the hallmark of the new generation of graphics processing units (GPUs), bringing new special effects and increasing image quality. Programmable graphics hardware also expands the algorithms available for direct volume rendering; new texture-based rendering algorithms are presented. These algorithms use hardware-accelerated 3D textures and the fragment shader to implement volume rendering.
Shader
Texture memory
Tiled rendering
Real-time rendering
3D rendering
Graphics hardware
Graphics processing unit
Citations (0)
In this paper, we present a method for rendering deformations as part of the programmable shader pipeline of contemporary Graphical Processing Units. In our method, we allow general deformations, including cuts. Previous approaches to deformation use the GPU as a general-purpose processor for computing vertex displacements. With the advent of vertex texture fetch in current GPUs, a number of approaches have been proposed to integrate deformation into the rendering pipeline. However, the rendering of cuts cannot be easily programmed into a vertex shader, due to the inability to change the topology of the mesh. Furthermore, rendering smooth deformed surfaces requires a fine tessellation of the mesh in order to prevent self-intersection and meshing artifacts for large deformations. In our approach, we overcome these problems by considering deformation as part of the pixel shader, where the transformation is performed on a per-pixel basis. We demonstrate how this approach can be efficiently implemented using contemporary graphics hardware to obtain high-quality rendering of deformation at interactive rates.
Shader
Texture memory
Real-time rendering
Tiled rendering
Graphics hardware
OpenGL
3D rendering
Citations (17)
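Moving deformation from the vertex stage to the pixel stage, as the abstract describes, amounts to applying a displacement field at texel-lookup time instead of moving mesh vertices. A minimal 2D sketch of that per-pixel transformation (the nearest-neighbor sampling and border clamping are simplifications of what a real pixel shader would do):

```python
import numpy as np

def deform_per_pixel(image, displacement):
    """Apply a per-pixel 2D displacement field at lookup time,
    mimicking deformation done in the pixel shader rather than on
    mesh vertices. Out-of-range lookups clamp to the border."""
    h, w = image.shape
    out = np.empty_like(image)
    for r in range(h):
        for c in range(w):
            sr = int(np.clip(r + displacement[r, c, 0], 0, h - 1))
            sc = int(np.clip(c + displacement[r, c, 1], 0, w - 1))
            out[r, c] = image[sr, sc]
    return out

img = np.arange(9, dtype=float).reshape(3, 3)
disp = np.zeros((3, 3, 2))
disp[..., 1] = 1        # every pixel samples one column to the right
warped = deform_per_pixel(img, disp)
```

Because the displacement field is just another texture, it can encode discontinuities such as cuts without any change to mesh topology, which is precisely the limitation of vertex-shader deformation the paper sidesteps.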