Automatic Generation of Dynamically Relightable Virtual Objects with Consumer-Grade Depth Cameras

Note: We don't have the ability to review papers

PubDate: August 2019

Teams: University of Southern California; University of Minnesota

Writers: Chih-Fan Chen; Evan Suma Rosenberg

PDF: Automatic Generation of Dynamically Relightable Virtual Objects with Consumer-Grade Depth Cameras

Abstract

This research demo showcases the results of a novel approach for estimating the illumination and reflectance properties of virtual objects captured using consumer-grade RGB-D cameras. The method is implemented within a fully automatic content creation pipeline that generates photorealistic objects for real-time virtual reality scenes with dynamic lighting. The geometry of the target object is first reconstructed from depth images captured with a handheld camera. To obtain nearly drift-free texture maps of the virtual object, a set of selected images from the original color stream is used for camera pose optimization. The approach further separates these images into diffuse (view-independent) and specular (view-dependent) components using low-rank decomposition. The lighting conditions during capture and the reflectance properties of the virtual object are subsequently estimated from the specular maps. By combining these parameters with the diffuse texture, reconstructed objects are then rendered in a real-time virtual reality demo that plausibly replicates the real-world illumination and showcases dynamic lighting with varying direction, intensity, and color.
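The core idea of the diffuse/specular separation can be illustrated with a small sketch. The abstract does not specify the exact decomposition algorithm, so the snippet below uses a simple truncated-SVD low-rank approximation as a hypothetical stand-in: each row of the observation matrix is the same surface region seen from a different camera pose, the low-rank term captures the view-independent (diffuse) component shared across views, and the residual captures view-dependent (specular) highlights. Function and variable names here are illustrative, not from the paper.

```python
import numpy as np

def separate_diffuse_specular(observations, rank=1):
    """Split aligned texture observations into diffuse and specular parts.

    observations: (n_views, n_texels) array of intensities, where each
    row is the same surface patch observed from a different camera pose.
    Returns (diffuse, specular) arrays of the same shape. This is a
    simplified sketch of a low-rank decomposition, not the paper's
    actual algorithm.
    """
    U, s, Vt = np.linalg.svd(observations, full_matrices=False)
    # Low-rank term: the component consistent across all views (diffuse).
    diffuse = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]
    # Residual: view-dependent highlights (specular), clamped to be
    # non-negative since specular reflection only adds light.
    specular = np.clip(observations - diffuse, 0.0, None)
    return diffuse, specular
```

On synthetic data where every view sees the same base texture except for an isolated highlight in one view, the residual isolates that highlight while the low-rank term stays close to the shared texture.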
