Output-Sensitive Avatar Representations for Immersive Telepresence
PubDate: November 2020
Teams: Bauhaus-Universität Weimar
Writers: Adrian Kreskowski; Stephan Beck; Bernd Froehlich
PDF: Output-Sensitive Avatar Representations for Immersive Telepresence
Abstract
In this article, we propose a system design and implementation for output-sensitive reconstruction, transmission, and rendering of 3D video avatars in distributed virtual environments. In our immersive telepresence system, users are captured by multiple RGBD sensors connected to a server that performs geometry reconstruction based on viewing feedback from remote telepresence parties. This feedback and reconstruction loop enables visibility-aware level-of-detail reconstruction of video avatars with respect to both geometry and texture data, and accounts for individual users as well as groups of collocated users. Our evaluation reveals that our approach significantly reduces reconstruction times, network bandwidth requirements, round-trip times, and rendering costs in many situations.
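To make the feedback-driven idea concrete, the following is a minimal illustrative sketch, not taken from the paper: it shows how a reconstruction server could map per-viewer visibility and distance feedback to a geometry and texture level of detail for the next frame. All type names, thresholds, and resolution values are hypothetical assumptions chosen only to demonstrate the general mechanism.

```cpp
// Minimal sketch (not from the paper): mapping per-viewer feedback to
// geometry/texture levels of detail. All names and thresholds are hypothetical.
#include <algorithm>
#include <cstdio>
#include <vector>

// Feedback a remote client might report each frame: its distance to the
// captured user's avatar and whether that avatar lies inside its view frustum.
struct ViewerFeedback {
    float distance_to_avatar;  // metres from viewer to the captured user
    bool  avatar_visible;      // client-side visibility test result
};

// Reconstruction settings the server would apply for the next frame.
struct ReconstructionLevel {
    int voxel_resolution;   // grid resolution used for geometry reconstruction
    int texture_edge_size;  // avatar texture atlas edge length in pixels
};

// Pick one level of detail that satisfies the most demanding visible viewer,
// so a group of collocated users sharing one avatar stream receives a single
// reconstruction pass instead of one per user.
ReconstructionLevel selectLevel(const std::vector<ViewerFeedback>& feedback) {
    float nearest_visible = 1e9f;
    for (const auto& f : feedback) {
        if (f.avatar_visible) {
            nearest_visible = std::min(nearest_visible, f.distance_to_avatar);
        }
    }
    if (nearest_visible > 1e8f) {
        // No remote party currently sees the avatar: coarse fallback level.
        return {64, 256};
    }
    if (nearest_visible < 1.5f) return {512, 2048};  // close-up conversation
    if (nearest_visible < 4.0f) return {256, 1024};  // mid-range
    return {128, 512};                               // far away
}

int main() {
    // Example: two remote viewers, one nearby with the avatar in view,
    // one far away with the avatar outside its frustum.
    std::vector<ViewerFeedback> feedback = {{1.2f, true}, {7.5f, false}};
    ReconstructionLevel level = selectLevel(feedback);
    std::printf("voxel resolution: %d, texture size: %d\n",
                level.voxel_resolution, level.texture_edge_size);
    return 0;
}
```

In this sketch, lowering the reconstruction level when no viewer sees the avatar, or when the nearest viewer is far away, is what reduces reconstruction time, transmitted data, and rendering cost in the spirit of the output-sensitive approach described in the abstract.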