Mutual Scene Synthesis for Mixed Reality Telepresence

PubDate: Apr 2022

Teams: Reality Labs Research; University of California

Writers: Mohammad Keshavarzi, Michael Zollhoefer, Allen Y. Yang, Patrick Peluse, Luisa Caldas

PDF: Mutual Scene Synthesis for Mixed Reality Telepresence

Abstract

Remote telepresence via next-generation mixed reality platforms can provide higher levels of immersion for computer-mediated communication, allowing participants to engage in a wide spectrum of activities that were previously not possible with 2D screen-based communication methods. However, as mixed reality experiences are limited to each user's local physical surroundings, finding a common virtual ground where users can freely move and interact with each other is challenging. In this paper, we propose a novel mutual scene synthesis method that takes the participants' spaces as input and generates a virtual synthetic scene corresponding to the functional features of all participants' local spaces. Our method combines a mutual function optimization module with a deep-learning conditional scene augmentation process to generate a scene that is mutually and physically accessible to all participants of a mixed reality telepresence scenario. The synthesized scene can hold mutual walkable, sittable, and workable functions, all corresponding to physical objects in the users' real environments. We perform experiments on the Matterport3D dataset and conduct comparative user studies to evaluate the effectiveness of our system. Our results show that the proposed approach is a promising research direction for facilitating contextualized telepresence systems for next-generation spatial computing platforms.
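The abstract describes the pipeline only at a high level, so the sketch below is a rough, hypothetical illustration of the mutual-accessibility idea rather than the paper's actual algorithm. It assumes each participant's walkable floor area is given as a 2D boolean occupancy grid in a shared coordinate frame, searches integer translations of one grid to maximize overlap with another (a toy stand-in for the mutual function optimization module), and intersects the aligned grids to obtain a region walkable for everyone. The grid representation and all function names are assumptions for illustration; the paper additionally uses a learned conditional scene augmentation model, which is not reproduced here.

```python
import numpy as np

def mutual_walkable_region(grids):
    """Intersect per-user walkable masks (True = free floor space).

    Each grid is a 2D boolean occupancy mask of one participant's room,
    assumed here to be pre-aligned and of identical shape.
    """
    mutual = grids[0].copy()
    for g in grids[1:]:
        mutual &= g
    return mutual

def best_alignment(fixed, moving, max_shift=5):
    """Brute-force search over integer translations of `moving` that
    maximizes walkable overlap with `fixed`. A toy stand-in for the
    mutual function optimization the abstract mentions."""
    best_score, best_offset = -1, (0, 0)
    h, w = fixed.shape
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # Shift `moving` by (dy, dx), padding exposed cells with False.
            shifted = np.zeros_like(fixed)
            shifted[max(0, dy):min(h, h + dy), max(0, dx):min(w, w + dx)] = \
                moving[max(0, -dy):min(h, h - dy), max(0, -dx):min(w, w - dx)]
            score = np.logical_and(fixed, shifted).sum()
            if score > best_score:
                best_score, best_offset = score, (dy, dx)
    return best_offset, best_score

# Usage: two toy 8x8 rooms; True marks walkable floor cells.
rng = np.random.default_rng(0)
room_a = rng.random((8, 8)) > 0.3
room_b = rng.random((8, 8)) > 0.3
offset, score = best_alignment(room_a, room_b)
print("best offset:", offset, "overlap cells:", score)
```

In the same spirit, sittable and workable regions could be handled as additional per-function masks derived from furniture in each user's scan, with the synthesis step placing virtual objects only where the corresponding masks agree across all participants.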
