
Progressive Frame Patching for FoV-based Point Cloud Video Streaming

Note: We don't have the ability to review papers

PubDate: Mar 2023

Teams: New York University

Writers: Tongyu Zong, Yixiang Mao, Chen Li, Yong Liu, Yao Wang

PDF: Progressive Frame Patching for FoV-based Point Cloud Video Streaming

Abstract

Immersive multimedia applications, such as Virtual, Augmented and Mixed Reality, have become more practical with advances in hardware and software for acquiring and rendering 3D media, as well as 5G/6G wireless networks. Such applications require the delivery of volumetric video to users with six degrees of freedom (6-DoF) movement. Point cloud has become a popular volumetric video format due to its flexibility and simplicity. A dense point cloud consumes much higher bandwidth than a 2D/360-degree video frame. The user Field of View (FoV) is more dynamic with 6-DoF movement than with 3-DoF movement. A user's view quality of a 3D object is affected by point occlusion and distance, which constantly change with user and object movements. To save bandwidth, FoV-adaptive streaming predicts the user FoV and only downloads the data falling in the predicted FoV, but it is vulnerable to FoV prediction errors, which become significant when a long buffer is used for smoothed streaming. In this work, we propose a multi-round progressive refinement framework for point cloud-based volumetric video streaming. Instead of sequentially downloading frames, we simultaneously download/patch multiple frames falling into a sliding time-window, leveraging the scalability of point-cloud coding. The rate allocation among all tiles of the active frames is solved analytically using heterogeneous tile utility functions calibrated by the predicted user FoV. Multi-frame patching takes advantage of the streaming smoothness resulting from a long buffer and the FoV prediction accuracy at short buffer lengths. We evaluate our solution using simulations driven by real point cloud videos, bandwidth traces, and 6-DoF FoV traces of real users. The experiments show that our solution is robust against bandwidth/FoV prediction errors, and can deliver high and smooth quality in the face of bandwidth variations and dynamic user movements.
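To make the multi-frame patching idea more concrete, below is a minimal, hypothetical Python sketch of one refinement round over a sliding window of frames. It is not the paper's algorithm or code: the log-shaped tile utilities, the FoV-probability weights, and the greedy marginal-gain allocation are illustrative stand-ins for the paper's calibrated utility functions and analytical rate-allocation solution.

```python
# Illustrative sketch (not the paper's implementation): one patching round that
# spends a bandwidth budget across the tiles of all frames in a sliding window,
# always refining the tile with the largest marginal utility gain.
import heapq
import math
from dataclasses import dataclass

@dataclass
class Tile:
    frame: int          # frame index inside the sliding window
    tile_id: int        # tile index within the frame
    fov_prob: float     # predicted probability the tile falls in the user's FoV
    bits: float = 0.0   # bits allocated to this tile so far, across rounds

    def utility(self, bits: float) -> float:
        # Assumed concave rate-utility model, weighted by the FoV probability.
        return self.fov_prob * math.log1p(bits)

    def marginal_gain(self, chunk: float) -> float:
        return self.utility(self.bits + chunk) - self.utility(self.bits)

def patch_round(tiles: list[Tile], budget: float, chunk: float = 1.0) -> None:
    """Allocate `budget` bits over the active tiles, `chunk` bits at a time,
    greedily picking the tile with the highest marginal utility gain."""
    heap = [(-t.marginal_gain(chunk), i) for i, t in enumerate(tiles)]
    heapq.heapify(heap)
    while budget >= chunk and heap:
        _, i = heapq.heappop(heap)
        tiles[i].bits += chunk
        budget -= chunk
        heapq.heappush(heap, (-tiles[i].marginal_gain(chunk), i))

# Usage: frames closer to playback have sharper FoV predictions, so their
# likely-visible tiles naturally attract more of the per-round budget.
window = [Tile(frame=f, tile_id=t, fov_prob=p)
          for f, t, p in [(0, 0, 0.9), (0, 1, 0.1), (1, 0, 0.6), (1, 1, 0.4)]]
patch_round(window, budget=10.0)
print([(t.frame, t.tile_id, round(t.bits, 1)) for t in window])
```

Because every frame stays "active" until it leaves the window, later rounds can keep patching it as the FoV prediction sharpens, which is the intuition behind the framework's robustness to prediction errors.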
