
Deep Local Shapes: Learning Local SDF Priors for Detailed 3D Reconstruction

Note: We do not have the ability to review papers.

PubDate: August 24, 2020

Teams: University of North Carolina at Chapel Hill; TU Dortmund University; Facebook Reality Labs

Writers: Rohan Chabra, Jan E. Lenssen, Eddy Ilg, Tanner Schmidt, Julian Straub, Steven Lovegrove, Richard Newcombe

PDF: Deep Local Shapes: Learning Local SDF Priors for Detailed 3D Reconstruction

Abstract

Efficiently reconstructing complex and intricate surfaces at scale is a long-standing goal in machine perception. To address this problem we introduce Deep Local Shapes (DeepLS), a deep shape representation that enables encoding and reconstruction of high-quality 3D shapes without prohibitive memory requirements. DeepLS replaces the dense volumetric signed distance function (SDF) representation used in traditional surface reconstruction systems with a set of locally learned continuous SDFs defined by a neural network, inspired by recent work such as DeepSDF. Unlike DeepSDF, which represents an object-level SDF with a neural network and a single latent code, we store a grid of independent latent codes, each responsible for storing information about surfaces in a small local neighborhood. This decomposition of scenes into local shapes simplifies the prior distribution that the network must learn, and also enables efficient inference. We demonstrate the effectiveness and generalization power of DeepLS by showing object shape encoding and reconstructions of full scenes, where DeepLS delivers high compression, accuracy, and local shape completion.
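To make the decomposition concrete, below is a minimal sketch (not the authors' code) of the idea the abstract describes: a regular grid of independent latent codes over the scene volume, each decoded into a local SDF by a single shared network. The class names, grid resolution, code dimension, and the tiny MLP decoder are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

class SharedDecoder:
    """Small MLP f(z, x_local) -> sdf, shared across all grid cells (architecture assumed)."""
    def __init__(self, code_dim=32, hidden=64, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.standard_normal((code_dim + 3, hidden)) * 0.1
        self.b1 = np.zeros(hidden)
        self.w2 = rng.standard_normal((hidden, 1)) * 0.1
        self.b2 = np.zeros(1)

    def __call__(self, code, x_local):
        h = np.maximum(np.concatenate([code, x_local]) @ self.w1 + self.b1, 0.0)
        return float(h @ self.w2 + self.b2)

class DeepLocalShapes:
    """Grid of per-cell latent codes over an axis-aligned volume, plus one shared decoder."""
    def __init__(self, grid_res=16, cell_size=0.25, code_dim=32, origin=(0.0, 0.0, 0.0)):
        self.origin = np.asarray(origin, dtype=float)
        self.cell_size = cell_size
        # Independent latent codes, one per cell; these would be optimized at inference time.
        self.codes = np.zeros((grid_res, grid_res, grid_res, code_dim))
        self.decoder = SharedDecoder(code_dim)

    def query_sdf(self, x_world):
        """Look up the cell containing x_world and decode its local SDF at that point."""
        x = np.asarray(x_world, dtype=float)
        idx = np.floor((x - self.origin) / self.cell_size).astype(int)
        cell_center = self.origin + (idx + 0.5) * self.cell_size
        x_local = (x - cell_center) / self.cell_size   # coordinates relative to the cell
        code = self.codes[tuple(idx)]                  # code responsible for this neighborhood
        return self.decoder(code, x_local)

# Usage: query one world-space point. In a full system the per-cell codes would
# first be fit to depth/SDF observations while the shared decoder weights stay fixed.
scene = DeepLocalShapes()
print(scene.query_sdf([1.0, 1.2, 0.8]))
```

The sketch is only meant to show why the decomposition simplifies the learning problem: the shared decoder only has to model surface patches inside a small cell, while scene-level structure is carried by the grid of codes rather than by a single global latent vector.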
