Multi-sensor capture and network processing for virtual reality conferencing

Note: We are unable to review this paper.

PubDate: June 2019

Teams: TNO

Writers: Sylvie Dijkstra-Soudarissanane, Karim El Assal, Simon Gunkel, Frank ter Haar, Rick Hindriks, Jan Willem Kleinrouweler, Omar Niamut

PDF: Multi-sensor capture and network processing for virtual reality conferencing

Abstract

Recent developments in key technologies such as 5G, Augmented and Virtual Reality (AR/VR) and the Tactile Internet open up new possibilities for communication. In particular, these technologies can enable remote communication and collaboration in shared remote experiences. In this demo, we work towards 6-degrees-of-freedom (6DoF) photo-realistic shared experiences by introducing a multi-view, multi-sensor end-to-end capture system. Our system acts as a baseline end-to-end pipeline for the capture, transmission and rendering of volumetric video of user representations. To handle multi-view video processing in a scalable way, we introduce a Multi-point Control Unit (MCU) that shifts processing from end devices into the cloud. MCUs are commonly used to bridge videoconferencing connections, and we design and deploy a VR-ready MCU to reduce both upload bandwidth and end-device processing requirements. In our demo, we focus on a remote meeting use case in which multiple people sit around a table and communicate in a shared VR environment.
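
The abstract's core argument is topological: routing each participant's volumetric stream through a cloud MCU keeps per-client upload cost constant, whereas a full-mesh setup grows with the number of participants. The sketch below is a minimal toy bandwidth model of that trade-off, not the authors' implementation; the `Client` class, the bitrate values, and the function names are illustrative assumptions only.

```python
from dataclasses import dataclass


@dataclass
class Client:
    """A conferencing participant uploading one volumetric (e.g. RGB-D) stream."""
    name: str
    upload_mbps: float  # bitrate of the captured user representation (assumed value)


def mesh_upload_per_client(clients: list[Client]) -> dict[str, float]:
    """Full-mesh topology: each client sends its own stream to every other client,
    so upload bandwidth scales with the number of participants."""
    n = len(clients)
    return {c.name: c.upload_mbps * (n - 1) for c in clients}


def mcu_upload_per_client(clients: list[Client]) -> dict[str, float]:
    """MCU topology: each client uploads its stream once; the cloud MCU fuses the
    views and distributes the result, so per-client upload stays constant."""
    return {c.name: c.upload_mbps for c in clients}


if __name__ == "__main__":
    # Hypothetical bitrate; real volumetric streams depend on sensor count and codec.
    room = [Client(f"user{i}", upload_mbps=25.0) for i in range(4)]
    print("full mesh:", mesh_upload_per_client(room))
    print("with MCU :", mcu_upload_per_client(room))
```

Under these assumptions, a four-person meeting costs each client 75 Mbps of upload in a full mesh but only 25 Mbps via the MCU, which is the scalability motivation the abstract gives for moving processing into the cloud.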
