SPSG: Self-Supervised Photometric Scene Generation from RGB-D Scans

Note: We do not have the ability to review papers.

PubDate: Apr 2021

Teams: Technical University of Munich, Google

Writers: Angela Dai, Yawar Siddiqui, Justus Thies, Julien Valentin, Matthias Nießner

PDF: SPSG: Self-Supervised Photometric Scene Generation from RGB-D Scans

Abstract

We present SPSG, a novel approach to generate high-quality, colored 3D models of scenes from RGB-D scan observations by learning to infer unobserved scene geometry and color in a self-supervised fashion. Our self-supervised approach learns to jointly inpaint geometry and color by correlating an incomplete RGB-D scan with a more complete version of that scan. Notably, rather than relying on 3D reconstruction losses to inform our 3D geometry and color reconstruction, we propose adversarial and perceptual losses operating on 2D renderings in order to achieve high-resolution, high-quality colored reconstructions of scenes. This exploits the high-resolution, self-consistent signal from individual raw RGB-D frames, in contrast to fused 3D reconstructions of the frames which exhibit inconsistencies from view-dependent effects, such as color balancing or pose inconsistencies. Thus, by informing our 3D scene generation directly through 2D signal, we produce high-quality colored reconstructions of 3D scenes, outperforming state of the art on both synthetic and real data.
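The abstract's key idea is to supervise 3D geometry and color generation through losses computed on 2D renderings of the predicted volume, rather than through direct 3D reconstruction losses. A minimal sketch of that idea follows; this is not the authors' code, and the orthographic renderer and plain L1 loss are simplifying stand-ins for the paper's differentiable rendering with adversarial and perceptual losses.

```python
# Sketch (assumption, not the SPSG implementation): supervise a predicted
# 3D occupancy + color volume by rendering it to 2D and comparing the
# rendering against a 2D target image.
import numpy as np

def render_ortho(occ, color):
    """Orthographically render a voxel grid along the depth axis:
    each (y, x) pixel takes the color of the first occupied voxel.
    occ: (D, H, W) binary occupancy; color: (D, H, W, 3) RGB."""
    D, H, W = occ.shape
    img = np.zeros((H, W, 3))
    for y in range(H):
        for x in range(W):
            hits = np.nonzero(occ[:, y, x])[0]
            if hits.size:
                img[y, x] = color[hits[0], y, x]
    return img

def render_loss(pred_occ, pred_color, target_img):
    """L1 loss on the 2D rendering; a stand-in for the paper's
    adversarial + perceptual 2D losses."""
    return np.abs(render_ortho(pred_occ, pred_color) - target_img).mean()
```

Because the loss is computed in image space, the supervision comes from individual high-resolution RGB-D frames, sidestepping the view-dependent color and pose inconsistencies baked into a fused 3D reconstruction.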
