IGNOR: Image-guided Neural Object Rendering

Note: We do not have the ability to review this paper.

PubDate: Jan 2020

Teams: 1Technical University of Munich, 2Stanford University, 3Max-Planck-Institute for Informatics, 4University of Erlangen-Nuremberg

Writers: Justus Thies1, Michael Zollhöfer2, Christian Theobalt3, Marc Stamminger4, Matthias Nießner1

PDF: IGNOR: Image-guided Neural Object Rendering

Abstract

We propose a learned image-guided rendering technique that combines the benefits of image-based rendering and GAN-based image synthesis. The goal of our method is to generate photo-realistic re-renderings of reconstructed objects for virtual and augmented reality applications (e.g., virtual showrooms, virtual tours & sightseeing, the digital inspection of historical artifacts). A core component of our work is the handling of view-dependent effects. Specifically, we directly train an object-specific deep neural network to synthesize the view-dependent appearance of an object. As input data, we use an RGB video of the object, from which a proxy geometry of the object is reconstructed via multi-view stereo. Based on this 3D proxy, the appearance of a captured view can be warped into a new target view, as in classical image-based rendering. This warping assumes diffuse surfaces; in the presence of view-dependent effects, such as specular highlights, it leads to artifacts. To address this, we propose EffectsNet, a deep neural network that predicts view-dependent effects. Based on these estimates, we convert observed images to diffuse images, which can then be projected into other views. In the target view, our pipeline reinserts the new view-dependent effects. To composite multiple reprojected images into a final output, we learn a composition network that produces photo-realistic results. With this image-guided approach, the network does not have to allocate capacity to "remembering" object appearance; instead, it learns how to combine the appearance of captured images. We demonstrate the effectiveness of our approach both qualitatively and quantitatively on synthetic as well as real data.
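The abstract describes a four-stage pipeline: remove view-dependent effects from a source view, warp the resulting diffuse image into the target view via the reconstructed proxy, reinsert effects for the target viewpoint, and fuse several such reprojections with a learned composition network. Below is a minimal PyTorch-style sketch of how these stages could be wired together. The module names EffectsNet and CompositionNet follow the paper's terminology, but the architectures, the precomputed warp grids, the per-pixel view-direction inputs, and all tensor shapes are illustrative assumptions, not the authors' implementation.

```python
# Sketch of the IGNOR pipeline from the abstract. All architectural
# details and shapes are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class EffectsNet(nn.Module):
    """Predicts per-pixel view-dependent effects (e.g., specular
    highlights) from an image plus per-pixel view directions
    (hypothetical 3-channel input)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, image, view_dirs):
        # image, view_dirs: (B, 3, H, W)
        return self.net(torch.cat([image, view_dirs], dim=1))


class CompositionNet(nn.Module):
    """Fuses K reprojected candidate images into one output frame."""
    def __init__(self, k):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 * k, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, candidates):
        # candidates: (B, K, 3, H, W) -> stack along channels
        b, k, c, h, w = candidates.shape
        return self.net(candidates.view(b, k * c, h, w))


def render_target_view(effects_net, comp_net, sources, src_dirs,
                       tgt_dirs, warp_grids):
    """sources: (B, K, 3, H, W) captured views; warp_grids:
    (B, K, H, W, 2) sampling grids assumed to be derived offline
    from the multi-view-stereo proxy geometry."""
    b, k, c, h, w = sources.shape
    candidates = []
    for i in range(k):
        # 1. Subtract predicted effects to obtain a diffuse image.
        diffuse = sources[:, i] - effects_net(sources[:, i], src_dirs[:, i])
        # 2. Warp the diffuse image into the target view via the proxy.
        warped = F.grid_sample(diffuse, warp_grids[:, i],
                               align_corners=False)
        # 3. Reinsert view-dependent effects for the target viewpoint.
        candidates.append(warped + effects_net(warped, tgt_dirs))
    # 4. Learned composition of all reprojected candidates.
    return comp_net(torch.stack(candidates, dim=1))
```

Subtracting and re-adding the EffectsNet prediction is what lets the warp itself stay a purely diffuse, geometry-driven operation; only the composition network needs to be photo-realistic, which matches the abstract's claim that the network combines captured appearance rather than "remembering" it.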
