
Expression-aware video inpainting for HMD removal in XR applications


Date: Jan 2024

Teams: Technische Universität Berlin; Tampere University; Ernst-Abbe University of Applied Sciences

Authors: Fatemeh Ghorbani Lohesara, Karen Egiazarian, Sebastian Knorr

PDF: Expression-aware video inpainting for HMD removal in XR applications

Abstract

Head-mounted displays (HMDs) serve as indispensable devices for observing extended reality (XR) environments and virtual content. However, HMDs present an obstacle to external recording techniques as they block the upper face of the user. This limitation significantly affects social XR applications, particularly teleconferencing, where facial features and eye gaze information play a vital role in creating an immersive user experience. In this study, we propose a new network for expression-aware video inpainting for HMD removal (EVI-HRnet) based on generative adversarial networks (GANs). Our model effectively fills in the occluded facial region, guided by facial landmarks and a single occlusion-free reference image of the user. The framework and its components use this reference frame to preserve the user’s identity across frames. To further improve the realism of the inpainted output, we introduce a novel facial expression recognition (FER) loss function for emotion preservation. Our results demonstrate the remarkable capability of the proposed framework to remove HMDs from facial videos while maintaining the subject’s facial expression and identity. Moreover, the outputs exhibit temporal consistency along the inpainted frames. This lightweight framework presents a practical approach for HMD occlusion removal, with the potential to enhance various collaborative XR applications without the need for additional hardware.
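The abstract describes a GAN-based inpainting generator conditioned on facial landmarks and a reference image, trained with an additional facial expression recognition (FER) loss for emotion preservation. The sketch below illustrates one plausible form such a FER loss could take: a feature-matching loss against a frozen, pretrained expression classifier. The class name `FERLoss`, the `fer_net` argument, and the choice of an L1 feature distance are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FERLoss(nn.Module):
    """Sketch of a facial-expression-recognition (FER) loss.

    Compares expression features of inpainted frames against the
    ground-truth frames using a frozen, pretrained FER network
    (`fer_net` is a placeholder; the paper's FER model and layer
    choice may differ).
    """

    def __init__(self, fer_net: nn.Module):
        super().__init__()
        self.fer_net = fer_net.eval()
        for p in self.fer_net.parameters():
            p.requires_grad_(False)  # keep the FER network frozen

    def forward(self, inpainted: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        # inpainted, target: (B, T, C, H, W) video clips; fold time into the batch
        b, t, c, h, w = inpainted.shape
        pred_feat = self.fer_net(inpainted.view(b * t, c, h, w))
        gt_feat = self.fer_net(target.view(b * t, c, h, w))
        # L1 distance between expression features (one plausible choice)
        return F.l1_loss(pred_feat, gt_feat)
```

In a full training loop this term would typically be weighted and added to the GAN's reconstruction and adversarial objectives, e.g. `total = l_rec + lambda_adv * l_adv + lambda_fer * fer_loss(out, gt)`, with the weights chosen empirically.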
