
Geometric correspondence fields: Learned differentiable rendering for 3D pose refinement in the wild

PubDate: Aug 2020

Teams: Graz University of Technology; Facebook Inc

Authors: Alexander Grabner, Yaming Wang, Peizhao Zhang, Peihong Guo, Tong Xiao, Peter Vajda, Peter M. Roth, Vincent Lepetit

PDF: Geometric correspondence fields: Learned differentiable rendering for 3D pose refinement in the wild

Abstract

We present a novel 3D pose refinement approach based on differentiable rendering for objects of arbitrary categories in the wild. In contrast to previous methods, we make two main contributions: First, instead of comparing real-world images and synthetic renderings in the RGB or mask space, we compare them in a feature space optimized for 3D pose refinement. Second, we introduce a novel differentiable renderer that learns to approximate the rasterization backward pass from data instead of relying on a hand-crafted algorithm. For this purpose, we predict deep cross-domain correspondences between RGB images and 3D model renderings in the form of what we call geometric correspondence fields. These correspondence fields serve as pixel-level gradients which are analytically propagated backward through the rendering pipeline to perform a gradient-based optimization directly on the 3D pose. In this way, we precisely align 3D models to objects in RGB images which results in significantly improved 3D pose estimates. We evaluate our approach on the challenging Pix3D data set and achieve up to 55 percent relative improvement compared to state-of-the-art refinement methods in multiple metrics.
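The core idea of propagating pixel-level correspondence gradients analytically back through the projection pipeline can be illustrated with a deliberately simplified sketch: translation-only refinement under a pinhole camera, in NumPy. Here the `field` closure stands in for the learned geometric correspondence field (in the paper it is predicted by a network from the RGB image and the rendering); the focal length `F`, principal point `C`, step size, and iteration count are illustrative assumptions, not values from the paper.

```python
import numpy as np

F, C = 500.0, 128.0  # assumed pinhole intrinsics (focal length, principal point)

def project(points, t):
    """Project 3D model points into the image after translating by t."""
    p = points + t
    return np.stack([F * p[:, 0] / p[:, 2] + C,
                     F * p[:, 1] / p[:, 2] + C], axis=1)

def refine_translation(points, t_init, field, iters=800, lr=1e-6):
    """Gradient-based pose refinement. `field(proj)` returns per-point 2D
    displacement vectors (the role the learned geometric correspondence
    field plays); they are propagated to the pose through the analytic
    Jacobian of the projection."""
    t = t_init.astype(float).copy()
    for _ in range(iters):
        p = points + t
        x, y, z = p[:, 0], p[:, 1], p[:, 2]
        proj = np.stack([F * x / z + C, F * y / z + C], axis=1)
        r = field(proj)  # pixel-level gradients, shape (N, 2)
        # Analytic Jacobian of the projection w.r.t. t, per point: (N, 2, 3)
        J = np.zeros((len(p), 2, 3))
        J[:, 0, 0] = F / z
        J[:, 0, 2] = -F * x / z**2
        J[:, 1, 1] = F / z
        J[:, 1, 2] = -F * y / z**2
        # Chain rule: accumulate J^T r into a gradient on the pose
        grad = np.einsum('nij,ni->j', J, r)
        t -= lr * grad
    return t
```

With a ground-truth projection as the correspondence target, the estimate converges to the true translation; the full method additionally handles rotation and replaces the hand-crafted residual with features optimized for refinement.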
