
gDNA: Towards Generative Detailed Neural Avatars

Note: We are not able to provide a review of this paper.

PubDate: March 2022

Teams: 1ETH Zürich, 2University of Tübingen, 3Max Planck Institute for Intelligent Systems, Tübingen

Writers: Xu Chen1,3, Tianjian Jiang1, Jie Song1, Jinlong Yang3, Michael J. Black3, Andreas Geiger2,3, Otmar Hilliges1

PDF: gDNA: Towards Generative Detailed Neural Avatars

Abstract

To make 3D human avatars widely available, we must be able to generate a variety of 3D virtual humans with varied identities and shapes in arbitrary poses. This task is challenging due to the diversity of clothed body shapes, their complex articulations, and the resulting rich, yet stochastic geometric detail in clothing. Hence, current methods to represent 3D people do not provide a full generative model of people in clothing. In this paper, we propose a novel method that learns to generate detailed 3D shapes of people in a variety of garments with corresponding skinning weights. Specifically, we devise a multi-subject forward skinning module that is learned from only a few posed, un-rigged scans per subject. To capture the stochastic nature of high-frequency details in garments, we leverage an adversarial loss formulation that encourages the model to capture the underlying statistics. We provide empirical evidence that this leads to realistic generation of local details such as clothing wrinkles. We show that our model is able to generate natural human avatars wearing diverse and detailed clothing. Furthermore, we show that our method can be used on the task of fitting human models to raw scans, outperforming the previous state-of-the-art.
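The forward-skinning module mentioned in the abstract predicts per-point skinning weights that deform a canonical shape into a target pose. As a minimal sketch of the underlying operation, here is classic linear blend skinning in NumPy; the function name and shapes are illustrative, not the paper's actual implementation, which learns the weight field with a neural network.

```python
import numpy as np

def linear_blend_skinning(points, weights, bone_transforms):
    """Deform canonical points by a weighted blend of rigid bone transforms.

    points:          (N, 3) canonical-space points
    weights:         (N, B) per-point skinning weights, each row summing to 1
    bone_transforms: (B, 4, 4) rigid transform for each bone
    """
    n = points.shape[0]
    # Homogeneous coordinates: (N, 4)
    homo = np.concatenate([points, np.ones((n, 1))], axis=1)
    # Blend the 4x4 transforms per point: (N, B) @ (B, 16) -> (N, 4, 4)
    blended = (weights @ bone_transforms.reshape(-1, 16)).reshape(n, 4, 4)
    # Apply each point's blended transform: (N, 4)
    posed = np.einsum('nij,nj->ni', blended, homo)
    return posed[:, :3]
```

With identity bone transforms the points are unchanged; a translation on a bone moves points in proportion to their weight on that bone, which is exactly the smooth articulation behavior the learned skinning weights must produce.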
