
Facial Expression Re-targeting from a Single Character

Note: We do not have the ability to review this paper.

PubDate: June 2023

Teams: Huawei Research

Writers: Ariel Larey, Omri Asraf, Adam Kelder, Itzik Wilf, Ofer Kruzel, Nati Daniel

PDF: Facial Expression Re-targeting from a Single Character

Abstract

Video retargeting for digital face animation is used in virtual reality, social media, gaming, movies, and video conferencing, aiming to animate avatars’ facial expressions based on videos of human faces. The standard way to represent facial expressions for 3D characters is with blendshapes, a vector of weights representing the avatar’s neutral shape and its variations under facial expressions, e.g., smile, puff, blinking. Datasets of frames paired with blendshape vectors are rare, and labeling can be laborious, time-consuming, and subjective. In this work, we developed an approach that handles the lack of appropriate datasets: instead, we used a synthetic dataset of only one character. To generalize to various characters, we re-represented each frame as facial landmarks. We developed a unique deep-learning architecture that groups landmarks for each facial organ and connects them to the relevant blendshape weights. Additionally, we incorporated complementary methods for facial expressions that landmarks did not represent well and gave special attention to eye expressions. We demonstrated the superiority of our approach over previous research in both qualitative and quantitative metrics. Our approach achieved a higher MOS of 68% and a lower MSE of 44.2% when tested on videos with various users and expressions.
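To make the per-organ grouping idea concrete, here is a minimal sketch in PyTorch of what a landmarks-to-blendshapes network of this kind could look like. It is not the authors' architecture: the 68-point landmark layout, the group index ranges, the per-organ weight counts, and all layer sizes below are illustrative assumptions. The only point it demonstrates is the abstract's stated design: landmarks are grouped per facial organ, and each group is connected only to that organ's blendshape weights.

```python
# Illustrative sketch only; names, groups, and dimensions are assumptions,
# not the paper's released implementation.
import torch
import torch.nn as nn

# Hypothetical landmark groups (indices into an assumed 68-point layout)
# and the number of blendshape weights each organ is assumed to control.
ORGAN_GROUPS = {
    "brows": (list(range(17, 27)), 6),
    "eyes":  (list(range(36, 48)), 10),   # e.g., blink, squint weights
    "mouth": (list(range(48, 68)), 20),   # e.g., smile, puff weights
}

class OrganBranch(nn.Module):
    """Maps one organ's 2D landmarks to that organ's blendshape weights."""
    def __init__(self, n_landmarks: int, n_weights: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_landmarks * 2, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_weights),
            nn.Sigmoid(),  # blendshape weights typically lie in [0, 1]
        )

    def forward(self, pts: torch.Tensor) -> torch.Tensor:
        # pts: (batch, n_landmarks, 2) -> flatten to (batch, n_landmarks * 2)
        return self.net(pts.flatten(start_dim=1))

class LandmarkToBlendshapes(nn.Module):
    """Groups landmarks per facial organ; concatenates per-organ weights."""
    def __init__(self):
        super().__init__()
        self.branches = nn.ModuleDict({
            name: OrganBranch(len(idx), n_w)
            for name, (idx, n_w) in ORGAN_GROUPS.items()
        })

    def forward(self, landmarks: torch.Tensor) -> torch.Tensor:
        # landmarks: (batch, 68, 2) normalized facial landmark coordinates
        outs = [
            self.branches[name](landmarks[:, idx, :])
            for name, (idx, _) in ORGAN_GROUPS.items()
        ]
        return torch.cat(outs, dim=1)  # (batch, total blendshape weights)

model = LandmarkToBlendshapes()
weights = model(torch.rand(1, 68, 2))  # -> shape (1, 36) in this sketch
```

Downstream, a standard blendshape rig would deform the avatar mesh from such a weight vector as neutral + Σᵢ wᵢ · (shapeᵢ − neutral), which is why the network only needs to regress the weights, not vertex positions.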
