
Disentangled Clothed Avatar Generation with Layered Representation

Note: We do not have the ability to review papers.

PubDate: Jan 2025

Teams: Shanghai Jiao Tong University, The University of Hong Kong

Writers: Weitian Zhang, Sijing Wu, Manwen Liao, Yichao Yan

PDF: Disentangled Clothed Avatar Generation with Layered Representation

Abstract

Clothed avatar generation has wide applications in virtual and augmented reality, filmmaking, and more. Previous methods have achieved success in generating diverse digital avatars; however, generating avatars with disentangled components (e.g., body, hair, and clothes) has long been a challenge. In this paper, we propose LayerAvatar, the first feed-forward diffusion-based method for generating component-disentangled clothed avatars. To achieve this, we first propose a layered UV feature plane representation, where components are distributed in different layers of a Gaussian-based UV feature plane with corresponding semantic labels. This representation supports high-resolution and real-time rendering, as well as expressive animation, including controllable gestures and facial expressions. Based on this well-designed representation, we train a single-stage diffusion model and introduce constraint terms to address the severe occlusion problem of the innermost human body layer. Extensive experiments demonstrate the impressive performance of our method in generating disentangled clothed avatars, and we further explore its applications in component transfer. The project page is available at: this https URL
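
The abstract describes the layered representation only at a high level. As a concrete illustration, below is a minimal, hypothetical PyTorch sketch of how a layered, Gaussian-based UV feature plane might be organized: one learnable UV plane per semantic component, with each texel decoded into the parameters of a 3D Gaussian. The layer set, channel counts, decoder architecture, and all names are assumptions for illustration, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

# Hypothetical layer set with semantic labels; the paper's actual
# component list may differ.
LAYERS = ("body", "hair", "clothes")

class LayeredUVFeaturePlane(nn.Module):
    """Sketch of a layered Gaussian-based UV feature plane.

    Each semantic component gets its own UV-space feature plane. Every
    texel of a plane is decoded into one 3D Gaussian (position offset,
    scale, rotation, opacity, color), so components can be rendered,
    animated, or transferred independently.
    """

    def __init__(self, uv_res: int = 256, feat_dim: int = 32):
        super().__init__()
        # One (feat_dim, H, W) learnable plane per semantic layer.
        self.planes = nn.ParameterDict({
            name: nn.Parameter(torch.randn(feat_dim, uv_res, uv_res) * 0.01)
            for name in LAYERS
        })
        # Shared per-texel decoder: features -> Gaussian parameters.
        # 3 (xyz offset) + 3 (scale) + 4 (quaternion) + 1 (opacity) + 3 (rgb) = 14
        self.decoder = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.ReLU(),
            nn.Linear(64, 14),
        )

    def decode_layer(self, name: str) -> dict:
        feats = self.planes[name]            # (C, H, W)
        feats = feats.flatten(1).t()         # (H*W, C): one feature per texel
        params = self.decoder(feats)         # (H*W, 14)
        xyz, scale, rot, rest = params.split([3, 3, 4, 4], dim=-1)
        return {
            "xyz_offset": xyz,               # offset from the template surface
            "scale": torch.exp(scale),       # keep scales positive
            "rotation": torch.nn.functional.normalize(rot, dim=-1),
            "opacity": torch.sigmoid(rest[:, :1]),
            "color": torch.sigmoid(rest[:, 1:]),
        }

model = LayeredUVFeaturePlane()
clothes_gaussians = model.decode_layer("clothes")  # decode one component alone
print(clothes_gaussians["xyz_offset"].shape)       # torch.Size([65536, 3])
```

Because each component lives in its own plane, one layer's Gaussians can be decoded and swapped without touching the others, which is the property that makes the component-transfer application mentioned in the abstract possible.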
