
High-fidelity Face Tracking for AR/VR via Deep Lighting Adaptation

Note: We don't have the ability to review papers.

PubDate: Mar 2021

Teams: Facebook Reality Labs; University of Rochester

Writers: Lele Chen, Chen Cao, Fernando De la Torre, Jason Saragih, Chenliang Xu, Yaser Sheikh

PDF: High-fidelity Face Tracking for AR/VR via Deep Lighting Adaptation

Abstract

3D video avatars can empower virtual communications by providing compression, privacy, entertainment, and a sense of presence in AR/VR. The best photo-realistic 3D AR/VR avatars driven by video, which can minimize uncanny effects, rely on person-specific models. However, existing person-specific photo-realistic 3D models are not robust to lighting; as a result, they typically miss subtle facial behaviors and introduce artifacts in the avatar. This is a major drawback for the scalability of these models in communication systems (e.g., Messenger, Skype, FaceTime) and AR/VR. This paper addresses these limitations by learning a deep lighting model that, in combination with a high-quality 3D face tracking algorithm, provides a method for subtle and robust facial motion transfer from a regular video to a 3D photo-realistic avatar. Extensive experimental validation and comparisons to other state-of-the-art methods demonstrate the effectiveness of the proposed framework in real-world scenarios with variability in pose, expression, and illumination. Please visit this https URL for more results. Our project page can be found at this https URL.
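To make the abstract's core idea concrete, the sketch below shows one minimal way an analysis-by-synthesis face tracker could use a learned lighting-adaptation module: a person-specific decoder renders the avatar from an expression code, a lighting network predicts per-pixel gain and bias from the observed frame, and the expression code is fitted with a photometric loss on the relit render. This is not the authors' implementation; `AvatarDecoder`, `LightingNet`, and all hyperparameters are hypothetical stand-ins chosen so the example is self-contained.

```python
# Conceptual sketch of face tracking with deep lighting adaptation.
# Assumptions (not from the paper): toy AvatarDecoder and LightingNet
# architectures, 64x64 images, a plain L1 photometric loss.
import torch
import torch.nn as nn

class AvatarDecoder(nn.Module):
    """Toy person-specific decoder: expression code -> RGB avatar image."""
    def __init__(self, code_dim=64, size=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(code_dim, 3 * size * size), nn.Sigmoid())
        self.size = size

    def forward(self, code):
        return self.net(code).view(-1, 3, self.size, self.size)

class LightingNet(nn.Module):
    """Toy lighting adapter: observed frame -> per-pixel gain and bias maps."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 6, kernel_size=3, padding=1)

    def forward(self, frame):
        gain, bias = self.conv(frame).chunk(2, dim=1)
        return 2.0 * torch.sigmoid(gain), torch.tanh(bias)  # gain in (0, 2)

decoder, lighting = AvatarDecoder(), LightingNet()
frame = torch.rand(1, 3, 64, 64)                 # observed video frame
code = torch.zeros(1, 64, requires_grad=True)    # expression code to be fitted
opt = torch.optim.Adam([code], lr=1e-2)

for step in range(100):
    rendered = decoder(code)                     # avatar under canonical lighting
    gain, bias = lighting(frame)                 # adapt to the frame's illumination
    relit = (rendered * gain + bias).clamp(0, 1)
    loss = nn.functional.l1_loss(relit, frame)   # photometric tracking loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The point of factoring lighting out this way is that the photometric loss then measures expression mismatch rather than illumination mismatch, which is why lighting-robust tracking can preserve subtle facial motion.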
