Pixel-aligned Volumetric Avatars
PubDate: June 2021
Teams: 1 Georgia Institute of Technology, 2 Facebook Reality Labs Research
Writers: Amit Raj 1, Michael Zollhöfer 2, Tomas Simon 2, Jason Saragih 2, Shunsuke Saito 2, James Hays 1, Stephen Lombardi 2
PDF: Pixel-aligned Volumetric Avatars
Abstract
Acquisition and rendering of photo-realistic human heads is a highly challenging research problem of particular importance for virtual telepresence. Currently, the highest quality is achieved by volumetric approaches trained in a person-specific manner on multi-view data. These models better represent fine structure, such as hair, compared to simpler mesh-based models. Volumetric models typically employ a global code to represent facial expressions, such that they can be driven by a small set of animation parameters. While such architectures achieve impressive rendering quality, they cannot easily be extended to the multi-identity setting. In this paper, we devise a novel approach for predicting volumetric avatars of the human head given just a small number of inputs. We enable generalization across identities via a novel parameterization that combines neural radiance fields with local, pixel-aligned features extracted directly from the inputs, thus side-stepping the need for very deep or complex networks. Our approach is trained in an end-to-end manner solely based on a photometric re-rendering loss, without requiring explicit 3D supervision. We demonstrate that our approach outperforms the existing state of the art in terms of quality and is able to generate faithful facial expressions in a multi-identity setting.
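To make the core idea concrete, here is a minimal NumPy sketch of what "pixel-aligned features conditioning a radiance field" means: a 3D query point is projected into each input view, a feature is bilinearly sampled from that view's feature map at the projected pixel, the per-view features are pooled, and the pooled feature (together with the point) conditions an MLP that predicts density and color. All function names, the mean-pooling choice, and the tiny random-weight MLP are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def project(point, K, Rt):
    """Project a 3D world point into a pinhole camera (K: 3x3 intrinsics,
    Rt: 3x4 extrinsics); returns pixel coordinates (u, v)."""
    p_cam = Rt[:, :3] @ point + Rt[:, 3]
    p_img = K @ p_cam
    return p_img[:2] / p_img[2]

def bilinear_sample(feat_map, uv):
    """Bilinearly sample a C x H x W feature map at pixel (u, v)."""
    C, H, W = feat_map.shape
    u = np.clip(uv[0], 0.0, W - 1 - 1e-6)
    v = np.clip(uv[1], 0.0, H - 1 - 1e-6)
    u0, v0 = int(u), int(v)
    du, dv = u - u0, v - v0
    return ((1 - du) * (1 - dv) * feat_map[:, v0, u0]
            + du * (1 - dv) * feat_map[:, v0, u0 + 1]
            + (1 - du) * dv * feat_map[:, v0 + 1, u0]
            + du * dv * feat_map[:, v0 + 1, u0 + 1])

def pixel_aligned_feature(point, feat_maps, cams):
    """Aggregate pixel-aligned features across input views.
    Mean pooling is an illustrative choice for multi-view aggregation."""
    samples = [bilinear_sample(f, project(point, K, Rt))
               for f, (K, Rt) in zip(feat_maps, cams)]
    return np.mean(samples, axis=0)

def radiance_mlp(point, feat, W1, b1, W2, b2):
    """Toy conditioned radiance field: MLP([point; feature]) -> (sigma, rgb)."""
    h = np.maximum(0.0, W1 @ np.concatenate([point, feat]) + b1)  # ReLU layer
    out = W2 @ h + b2
    sigma = np.maximum(0.0, out[0])          # non-negative density
    rgb = 1.0 / (1.0 + np.exp(-out[1:4]))    # color squashed to [0, 1]
    return sigma, rgb
```

The pooled feature, being sampled at the point's projection in each image, carries local identity- and expression-specific evidence; conditioning the MLP on it (rather than on a global per-person latent code) is what lets a single network generalize across identities.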