Trajectory-Based Viewport Prediction for 360-Degree Virtual Reality Videos
PubDate: January 2019
Teams: Adobe Research
Writers: Stefano Petrangeli; Gwendal Simon; Viswanathan Swaminathan
Viewport-based adaptive streaming has emerged as the main technique to efficiently stream bandwidth-intensive 360° videos over the best-effort Internet. In viewport-based streaming, only the portion of the video currently watched by the user is streamed at the highest quality, using video tiling, foveated encoding, or similar approaches. To unlock the full potential of these approaches, though, the future position of the user's viewport has to be predicted. Indeed, accurate viewport prediction is necessary to minimize quality transitions while the user moves. Current solutions mainly focus on short-term prediction horizons (e.g., less than 2 s), while long-term viewport prediction has received less attention. This paper presents a novel algorithm for the long-term prediction of the user viewport. In the proposed algorithm, the viewport evolution over time of a given user is modeled as a trajectory in the roll, pitch, and yaw angle domain. For a given video, a function is extrapolated to model the evolution of the three aforementioned angles over time, based on the viewing patterns of past users in the system. Moreover, trajectories that exhibit similar viewing behaviors are clustered together, and a different function is computed for each cluster. The pre-computed functions are subsequently used at run-time to predict the future viewport position of a new user in the system, for the specific video. Preliminary results on a public dataset of 16 videos, each watched by 61 users on average, show that the proposed algorithm can increase the predicted viewport area by 13% on average compared to several benchmarking heuristics, for prediction horizons up to 10 seconds.
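The pipeline the abstract describes (cluster past users' angle trajectories, fit one function per cluster, then match a new user's observed prefix to a cluster and extrapolate) could be sketched as follows. This is an illustrative reconstruction, not the paper's actual implementation: the k-means clustering, polynomial model, and all function and parameter names (`fit_cluster_models`, `predict_viewport`, `n_clusters`, `degree`) are assumptions for the sake of the example.

```python
import numpy as np

def fit_cluster_models(trajectories, n_clusters=2, degree=2, seed=0):
    """Cluster past users' head trajectories and fit one polynomial per
    cluster and per angle (roll, pitch, yaw). Illustrative sketch only.

    trajectories: array (n_users, n_samples, 3) of (roll, pitch, yaw)
    angles sampled at uniform time steps.
    """
    n_users, n_samples, _ = trajectories.shape
    flat = trajectories.reshape(n_users, -1)

    # Plain k-means on flattened trajectories (the paper's clustering
    # criterion may differ; this stands in for "similar viewing behaviors").
    rng = np.random.default_rng(seed)
    centroids = flat[rng.choice(n_users, n_clusters, replace=False)]
    for _ in range(20):
        dists = np.linalg.norm(flat[:, None, :] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        for k in range(n_clusters):
            if np.any(labels == k):
                centroids[k] = flat[labels == k].mean(axis=0)

    # One least-squares polynomial per cluster and per angle over time,
    # standing in for the extrapolated per-cluster function.
    t = np.arange(n_samples)
    models = []
    for k in range(n_clusters):
        mean_traj = trajectories[labels == k].mean(axis=0)  # (n_samples, 3)
        models.append([np.polyfit(t, mean_traj[:, a], degree) for a in range(3)])
    return centroids, models

def predict_viewport(prefix, centroids, models, horizon):
    """Match a new user's observed prefix (n_obs, 3) to the nearest
    cluster and extrapolate that cluster's polynomials `horizon` steps."""
    n_obs = prefix.shape[0]
    n_clusters = centroids.shape[0]
    # Compare only against the corresponding prefix of each centroid.
    cent_prefix = centroids.reshape(n_clusters, -1, 3)[:, :n_obs]
    k = np.linalg.norm(
        cent_prefix.reshape(n_clusters, -1) - prefix.reshape(-1), axis=1
    ).argmin()
    t_future = np.arange(n_obs, n_obs + horizon)
    # Returns (horizon, 3) predicted (roll, pitch, yaw) angles.
    return np.stack([np.polyval(models[k][a], t_future) for a in range(3)], axis=1)
```

A streaming client would run `predict_viewport` once a short prefix of the new user's head motion is available, then prefetch the tiles covering the extrapolated angles at high quality.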