
GeneFace: Generalized and High-Fidelity Audio-Driven 3D Talking Face Synthesis

Note: We do not have the ability to review papers.

PubDate: Jan 2023

Teams: Zhejiang University; ByteDance

Authors: Zhenhui Ye, Ziyue Jiang, Yi Ren, Jinglin Liu, Jinzheng He, Zhou Zhao

PDF: GeneFace: Generalized and High-Fidelity Audio-Driven 3D Talking Face Synthesis

Abstract

Generating photo-realistic video portraits from arbitrary speech audio is a crucial problem in film-making and virtual reality. Recently, several works have explored the use of neural radiance fields (NeRF) in this task to improve 3D realness and image fidelity. However, the generalizability of previous NeRF-based methods to out-of-domain audio is limited by the small scale of training data. In this work, we propose GeneFace, a generalized and high-fidelity NeRF-based talking face generation method, which can generate natural results corresponding to various out-of-domain audio. Specifically, we learn a variational motion generator on a large lip-reading corpus, and introduce a domain-adaptive post-net to calibrate the result. Moreover, we learn a NeRF-based renderer conditioned on the predicted facial motion. A head-aware torso-NeRF is proposed to eliminate the head-torso separation problem. Extensive experiments show that our method achieves more generalized and high-fidelity talking face generation compared to previous methods.
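The abstract describes a three-stage pipeline: a variational motion generator predicts facial motion from audio, a domain-adaptive post-net calibrates that motion to the target speaker, and a NeRF-based renderer produces the final frames. A minimal sketch of that data flow is below; all class and function names here are hypothetical placeholders standing in for the paper's neural components, not the authors' actual API.

```python
# Hypothetical sketch of the GeneFace pipeline stages (names are
# placeholders; each stage is a trained neural network in the paper).

class VariationalMotionGenerator:
    """Stage 1: audio-to-motion, trained on a large lip-reading corpus."""
    def predict(self, audio_features):
        # Placeholder: map each audio frame to a generic facial-motion code.
        return [("motion", f) for f in audio_features]

class DomainAdaptivePostNet:
    """Stage 2: calibrates generic motion into the target speaker's domain."""
    def refine(self, motion):
        # Placeholder: identity refinement standing in for the post-net.
        return motion

class NeRFRenderer:
    """Stage 3: renders frames conditioned on predicted facial motion.
    In the paper a separate head-aware torso-NeRF handles the torso."""
    def render(self, motion):
        return [f"frame_{i}" for i, _ in enumerate(motion)]

def generate_talking_face(audio_features):
    motion = VariationalMotionGenerator().predict(audio_features)
    motion = DomainAdaptivePostNet().refine(motion)
    return NeRFRenderer().render(motion)

frames = generate_talking_face(audio_features=[0.1, 0.2, 0.3])
print(len(frames))  # one rendered frame per audio frame
```

The key design point the abstract emphasizes is that stage 1 is trained on large out-of-domain data for generalization, while stage 2 adapts its output to the small person-specific training video that the NeRF renderer is fitted on.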
