Egocentric Human Segmentation for Mixed Reality
PubDate: May 2020
Teams: Universidad Autonoma de Madrid, Nokia Bell-Labs
Writers: Andrija Gajic, Ester Gonzalez-Sosa, Diego Gonzalez-Morin, Marcos Escudero-Viñolo, Alvaro Villegas
PDF: Egocentric Human Segmentation for Mixed Reality
Abstract
The objective of this work is to segment human body parts from egocentric video using semantic segmentation networks. Our contribution is two-fold: i) we create a semi-synthetic dataset composed of more than 15,000 realistic images and associated pixel-wise labels of egocentric human body parts, such as arms or legs, covering different demographic factors; ii) building upon the ThunderNet architecture, we implement a deep learning semantic segmentation algorithm that is able to perform beyond real-time requirements (16 ms for 720 x 720 images). We believe this method will enhance the sense of presence in Virtual Environments and constitute a more realistic alternative to standard virtual avatars.
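To illustrate the mixed-reality use case described in the abstract, here is a minimal sketch of how a per-pixel body-part mask from an egocentric frame could be composited over a rendered virtual scene. The `TinySegNet` model is a hypothetical stand-in, not the ThunderNet-based network from the paper; shapes follow the 720 x 720 resolution quoted above.

```python
# Sketch only: real body-part pixels (mask == 1) replace the virtual scene.
# TinySegNet is a hypothetical placeholder for the ThunderNet-based model.
import numpy as np
import torch
import torch.nn as nn


class TinySegNet(nn.Module):
    """Toy encoder-decoder producing a 2-class (background/body) mask."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, num_classes, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))


def composite(frame: np.ndarray, virtual: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Overlay egocentric body-part pixels onto the virtual scene."""
    mask3 = mask[..., None].astype(frame.dtype)
    return frame * mask3 + virtual * (1 - mask3)


if __name__ == "__main__":
    model = TinySegNet().eval()
    # Dummy 720x720 egocentric frame and rendered virtual scene.
    frame = np.random.rand(720, 720, 3).astype(np.float32)
    virtual = np.zeros_like(frame)

    with torch.no_grad():
        x = torch.from_numpy(frame).permute(2, 0, 1).unsqueeze(0)  # NCHW
        logits = model(x)                       # (1, 2, 720, 720)
        mask = logits.argmax(dim=1)[0].numpy()  # 0 = background, 1 = body part

    mr_frame = composite(frame, virtual, mask)
    print(mr_frame.shape)  # (720, 720, 3)
```

In practice the mask would come from the trained ThunderNet-based model running on a headset camera stream; the sub-16 ms inference budget reported in the paper is what makes this per-frame compositing feasible.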