An efficient image-based rendering method
2004
Given a set of images of the same scene, we propose an efficient method for realistically synthesizing an image seen from a new viewpoint. Unlike existing techniques that explicitly reconstruct the 3D geometry of the scene, we directly reconstruct the colour of each pixel in the new image using a weighted photoconsistency constraint. A global photoconsistency constraint is first applied to generate a list of plausible colours for each rendered pixel. To account for partial occlusion and deficiencies in the image-formation model, we then iteratively update the colour of each rendered pixel so that its local texture statistics resemble those of the input images. Experimental results on synthesizing new images from novel viewing positions, given a set of input images, show that the proposed method is promising.
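The core idea of scoring candidate colours by weighted photoconsistency can be sketched as follows. This is a minimal illustration, not the paper's implementation: the exponential-of-variance score, the view weights, and the `plausible_colours` threshold are all assumptions chosen for clarity.

```python
import numpy as np

def weighted_photoconsistency(samples, weights):
    """Score how consistent a set of colour samples is across input views.

    samples: (N, 3) array of RGB colours observed for one candidate 3D
             point, one row per input image that sees the point.
    weights: (N,) array of per-view weights, e.g. larger for views whose
             rays are angularly closer to the novel view's ray (an
             assumed weighting scheme, not the paper's).
    Returns (score, colour): score in (0, 1], 1 means perfectly
    consistent; colour is the weighted mean colour estimate.
    """
    w = weights / weights.sum()
    mean = (w[:, None] * samples).sum(axis=0)          # weighted mean colour
    var = (w[:, None] * (samples - mean) ** 2).sum()   # total weighted variance
    return float(np.exp(-var)), mean                   # map variance to a score

def plausible_colours(candidates, threshold=0.8):
    """Build the list of plausible colours for one rendered pixel by
    keeping only candidate depths whose view samples are sufficiently
    photoconsistent (threshold is an illustrative value)."""
    colours = []
    for samples, weights in candidates:
        score, colour = weighted_photoconsistency(samples, weights)
        if score >= threshold:
            colours.append(colour)
    return colours
```

A perfectly agreeing set of samples scores 1.0 and survives the threshold, while samples that disagree strongly across views (as under occlusion) score lower and are dropped, leaving a short list of plausible colours for the later iterative refinement step.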