Understanding User Behavior in Volumetric Video Watching: Dataset, Analysis and Prediction
PubDate: Oct 2023
Teams: CUHK-Shenzhen; Simon Fraser University
Writers: Kaiyuan Hu, Haowen Yang, Yili Jin, Junhua Liu, Yongting Chen, Miao Zhang, Fangxin Wang
PDF: Understanding User Behavior in Volumetric Video Watching: Dataset, Analysis and Prediction
Abstract
Volumetric video has emerged as an attractive new video paradigm in recent years, as it provides an immersive and interactive 3D viewing experience with six degrees of freedom (6DoF). Unlike traditional 2D or panoramic videos, volumetric videos require dense point clouds, voxels, meshes, or large neural models to depict volumetric scenes, which results in a prohibitively high bandwidth burden for video delivery. User behavior analysis, especially viewport and gaze analysis, therefore plays a significant role in prioritizing the streaming of content within users' viewports and degrading the quality of the remaining content to maximize user QoE under limited bandwidth. Although understanding user behavior is crucial, to the best of our knowledge there are no available 3D volumetric video viewing datasets containing fine-grained user interactivity features, not to mention further analysis and behavior prediction.