
Analyzing viewport prediction under different VR interactions

Note: We do not have the ability to review papers

PubDate: December 2019

Teams: AT&T Labs Research, University of Minnesota

Writers: Tan Xu, Bo Han, Feng Qian

PDF: Analyzing viewport prediction under different VR interactions

Abstract

In this paper, we study the problem of predicting a user’s viewport movement in a networked VR system (i.e., predicting which direction the viewer will look shortly). This critical knowledge guides the VR system in making judicious content fetching decisions, leading to efficient network bandwidth utilization (e.g., up to 35% bandwidth savings on LTE networks, as demonstrated by our previous work) and improved Quality of Experience (QoE). For this study, we collect viewport trajectory traces from 275 users who watched popular 360° panoramic videos for a total duration of 156 hours. Leveraging our unique datasets, we compare viewport movement patterns across different interaction modes: wearing a head-mounted device, tilting a smartphone, and dragging the mouse on a PC. We then apply diverse machine learning algorithms, from simple regression to sophisticated deep learning that leverages crowd-sourced data, to analyze the performance of viewport prediction. We find that the deep learning approach is robust across all interaction modes and yields superior performance, especially when the viewport is more challenging to predict, e.g., over a longer prediction window or with more dynamic movement. Overall, our analysis provides key insights into how to intelligently perform viewport prediction in networked VR systems.
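To make the prediction task concrete, below is a minimal sketch of the kind of simple-regression baseline the abstract contrasts with deep learning: it fits a least-squares line to the viewer's recent yaw/pitch history and extrapolates it over the prediction window. The trace format, sampling rate, and function names are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def predict_viewport(history, horizon_s):
    """Extrapolate a future viewport orientation with a least-squares linear fit.

    history   : array of shape (n, 3) with columns (timestamp_s, yaw_deg, pitch_deg)
    horizon_s : prediction window in seconds
    Returns (yaw_deg, pitch_deg) predicted at the end of the window.
    """
    t = history[:, 0]
    future_t = t[-1] + horizon_s
    pred = []
    for col in (1, 2):  # yaw, then pitch
        # Fit angle ~ a * t + b over the recent history and extrapolate.
        a, b = np.polyfit(t, history[:, col], deg=1)
        pred.append(a * future_t + b)
    yaw, pitch = pred
    # Wrap yaw into [-180, 180) and clamp pitch to its physical range.
    yaw = (yaw + 180.0) % 360.0 - 180.0
    pitch = float(np.clip(pitch, -90.0, 90.0))
    return yaw, pitch

if __name__ == "__main__":
    # Hypothetical 1-second history sampled at 10 Hz: a slow rightward pan.
    ts = np.linspace(0.0, 1.0, 11)
    trace = np.column_stack([ts, 5.0 * ts, np.zeros_like(ts)])
    print(predict_viewport(trace, horizon_s=0.5))  # roughly (7.5, 0.0)
```

A baseline like this tends to hold up for short prediction windows and smooth head motion, which is consistent with the abstract's observation that the deep learning approach pays off mainly for longer windows and more dynamic movement.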
