FLAG: Flow-based 3D Avatar Generation from Sparse Observations


Published: June 2022

Research lab: Mixed Reality & AI Lab – Cambridge

Authors: Sadegh Aliakbarian, Pashmina Cameron, Federica Bogo, Andrew Fitzgibbon, Tom Cashman

PDF: FLAG: Flow-based 3D Avatar Generation from Sparse Observations

Project: FLAG: Flow-based 3D Avatar Generation from Sparse Observations

Abstract

To represent people in mixed reality applications for collaboration and communication, we need to generate realistic and faithful avatar poses. However, the signal streams available for this task from head-mounted devices (HMDs) are typically limited to head pose and hand pose estimates. While these signals are valuable, they are an incomplete representation of the human body, making it challenging to generate a faithful full-body avatar. We address this challenge by developing a flow-based generative model of the 3D human body from sparse observations, wherein we learn not only a conditional distribution of 3D human pose, but also a probabilistic mapping from observations to the latent space, from which we can generate a plausible pose along with uncertainty estimates for the joints. We show that our approach is not only a strong predictive model, but can also act as an efficient pose prior in different optimization settings, where a good initial latent code plays a major role.
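The core idea of a conditional flow model of this kind can be sketched with a single conditional affine layer: full-body pose is mapped invertibly to a latent code conditioned on the sparse HMD signal, so poses can be generated (and per-joint uncertainty estimated) by sampling latents and running the flow in reverse. This is a minimal illustrative sketch, not the paper's architecture: the dimensions, the random weight matrices `W_s`/`W_t`, and the single-layer flow are all assumptions made for demonstration (FLAG stacks many such invertible layers with learned conditioning networks).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 3 observed joints (head + two hands) in 3D -> a
# 22-joint body pose parameterization. Both numbers are illustrative.
OBS_DIM, POSE_DIM = 9, 66

# Random stand-ins for learned conditioning networks that map the sparse
# observation to per-dimension scale and shift of the affine flow layer.
W_s = rng.normal(0, 0.1, (OBS_DIM, POSE_DIM))
W_t = rng.normal(0, 0.1, (OBS_DIM, POSE_DIM))

def forward(x, c):
    """Pose -> latent; also returns log|det J| used in the flow likelihood."""
    s, t = c @ W_s, c @ W_t
    z = (x - t) * np.exp(-s)
    return z, -s.sum()

def inverse(z, c):
    """Latent -> pose: generation conditioned on the sparse observation c."""
    s, t = c @ W_s, c @ W_t
    return z * np.exp(s) + t

# Invertibility: a pose round-trips exactly through the flow.
c = rng.normal(size=OBS_DIM)
x = rng.normal(size=POSE_DIM)
z, logdet = forward(x, c)
x_rec = inverse(z, c)
print(np.allclose(x, x_rec))

# Sampling latents and decoding yields plausible poses; the per-dimension
# spread of the samples serves as a simple joint-uncertainty estimate.
samples = inverse(rng.normal(size=(100, POSE_DIM)), c[None, :])
uncertainty = samples.std(axis=0)
```

In the paper's setting, an additional probabilistic mapping from the observation to the latent space supplies a good initial latent code, which is what makes the model useful as a prior inside optimization loops.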
