
DenseRaC: Joint 3D Pose and Shape Estimation by Dense Render-and-Compare

Note: We are not able to review papers.

Title: DenseRaC: Joint 3D Pose and Shape Estimation by Dense Render-and-Compare

Teams: Facebook

Authors: Yuanlu Xu, Song-Chun Zhu, Tony Tung

Publication date: October 28, 2019

Abstract

We present DenseRaC, a novel end-to-end framework for jointly estimating 3D human pose and body shape from a monocular RGB image. Our two-step framework takes the body pixel-to-surface correspondence map (i.e., IUV map) as a proxy representation and then estimates parameterized human pose and shape. Specifically, given an estimated IUV map, we develop a deep neural network that optimizes 3D body reconstruction losses and further integrates a render-and-compare scheme to minimize differences between the input and the rendered output, i.e., dense body landmarks, body part masks, and adversarial priors. To boost learning, we further construct a large-scale synthetic dataset (MOCA) utilizing web-crawled Mocap sequences, 3D scans and animations. The generated data covers diverse camera views, human actions and body shapes, and is paired with full ground truth. Our model jointly learns to represent the 3D human body from hybrid datasets, mitigating the problem of unpaired training data. Our experiments show that DenseRaC achieves superior performance over the state of the art on public benchmarks across various human-related tasks.
