
Neural RGB-D Surface Reconstruction

Note: We do not have the ability to review papers.

PubDate: Sep 2022

Teams: Technical University of Munich; Google

Writers: Dejan Azinović; Ricardo Martin-Brualla; Dan B Goldman; Matthias Nießner; Justus Thies

PDF: Neural RGB-D Surface Reconstruction

Abstract

Obtaining high-quality 3D reconstructions of room-scale scenes is of paramount importance for upcoming applications in AR or VR. These range from mixed reality applications for teleconferencing, virtual measuring, and virtual room planning, to robotic applications. While current volume-based view synthesis methods that use neural radiance fields (NeRFs) show promising results in reproducing the appearance of an object or scene, they do not reconstruct an actual surface. The volumetric representation of the surface based on densities leads to artifacts when a surface is extracted using Marching Cubes, since during optimization, densities are accumulated along the ray and are not used at a single sample point in isolation. Instead of this volumetric representation of the surface, we propose to represent the surface using an implicit function (truncated signed distance function). We show how to incorporate this representation in the NeRF framework, and extend it to use depth measurements from a commodity RGB-D sensor, such as a Kinect. In addition, we propose a pose and camera refinement technique which improves the overall reconstruction quality. In contrast to concurrent work on integrating depth priors in NeRF, which concentrates on novel view synthesis, our approach is able to reconstruct high-quality, metrical 3D reconstructions.
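The key idea, replacing NeRF's density with a truncated signed distance function, can be illustrated with a short sketch. In the spirit of the paper, per-sample SDF predictions are converted into rendering weights via a bell-shaped function that peaks at the zero crossing (the surface) and is normalized along the ray. The sigmoid-product form, truncation value, and normalization below are illustrative assumptions, not the paper's verbatim formulation.

```python
# Hedged sketch: turning truncated-SDF samples along one ray into
# volume-rendering weights that concentrate at the surface crossing.
import numpy as np

def sdf_to_render_weights(sdf, trunc=0.05):
    """sdf: (n_samples,) signed distances along a ray; trunc: truncation band (assumed value)."""
    sig = lambda x: 1.0 / (1.0 + np.exp(-x))
    w = sig(sdf / trunc) * sig(-sdf / trunc)  # bell shape, maximal where sdf == 0
    return w / (w.sum() + 1e-9)               # normalize weights along the ray

# Usage: composite per-sample colors into a single pixel color.
sdf = np.linspace(0.3, -0.3, 64)              # samples along a ray crossing a surface
colors = np.random.rand(64, 3)                # per-sample RGB predictions (placeholder)
pixel = (sdf_to_render_weights(sdf)[:, None] * colors).sum(axis=0)
```

Because the weights are a direct function of the signed distance rather than accumulated densities, the zero-level set remains well defined at every sample point, which is what makes clean Marching Cubes extraction possible.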
