Real-Time Facial Expression Driving based on 3D Facial Feature Point

PubDate: February 2022

Teams: Sichuan University

Writers: Huaibo Zhang; Gen Li

PDF: Real-Time Facial Expression Driving based on 3D Facial Feature Point

Abstract

Facial expression driving makes the expressions of virtual characters more realistic and natural, and is widely used in movies, games, and social software. Existing real-time facial expression driving algorithms are limited by dedicated hardware, prefabricated models, or lengthy training. To remove these limitations, this paper proposes a real-time expression driving algorithm based on 3D facial feature points that takes RGBD data as input. During face capture, the ICP algorithm recovers the rigid motion of the face, and a deformation transfer algorithm captures the non-rigid motion; the entire face-capture process takes only 0.4 ms. Because the algorithm operates on 3D facial feature points, it requires neither a prefabricated face model nor lengthy training. The algorithm can use not only real faces but also virtual characters to drive virtual faces.
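To illustrate the rigid-capture step the abstract mentions, here is a minimal sketch of point-to-point ICP over 3D landmarks using NumPy. This is not the paper's implementation; the function names, iteration count, and brute-force nearest-neighbour matching are all assumptions for illustration, and the rigid fit uses the standard SVD (Kabsch) solution.

```python
import numpy as np

def best_fit_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src -> dst via SVD (Kabsch).
    src, dst: (N, 3) arrays of corresponding 3D points."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

def icp(src, dst, iters=20):
    """Point-to-point ICP: alternate nearest-neighbour matching and rigid fitting.
    Returns src transformed toward dst."""
    cur = src.copy()
    for _ in range(iters):
        # Brute-force nearest neighbours; fine for a few dozen facial landmarks.
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        matched = dst[d.argmin(axis=1)]
        R, t = best_fit_transform(cur, matched)
        cur = cur @ R.T + t
    return cur
```

With a small head rotation and translation applied to a set of landmarks, a few ICP iterations recover the rigid pose; the non-rigid (expression) residual would then be handled by the deformation transfer step described in the paper.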
