Buffer-Aware Virtual Reality Video Streaming With Personalized and Private Viewport Prediction

Note: We don't have the ability to review this paper.

PubDate: October 2021

Teams: Beijing University of Posts and Telecommunications; Carleton University

Writers: Ran Zhang; Jiang Liu; Fangqi Liu; Tao Huang; Qinqin Tang; Shangguang Wang; F. Richard Yu

PDF: Buffer-Aware Virtual Reality Video Streaming With Personalized and Private Viewport Prediction

Abstract

Viewport prediction and prefetching strongly influence VR video streaming performance. This work proposes ComPer-FedAvg, a novel federated learning-based training algorithm for viewport prediction models. The algorithm leverages both a VR video's common viewing pattern and each user's personal viewing pattern to train the prediction model in a distributed, privacy-preserving manner. Further, taking viewport prediction accuracy into account, a stochastic game is formulated for the communication resource allocation problem in the VR streaming network, where limited communication resource blocks are auctioned to users to maximize the overall VR viewing experience. For each user, the auction is decomposed into two disjoint subproblems: choosing the optimal data rate to request and claiming the true value (bidding). The optimal true-value claim is analytically proved to equal the VR viewing reward under a given data rate. Because users lack global information when requesting data rates, the data rate requesting problem is reformulated as a partially observable Markov decision process (POMDP) and solved with a novel deep reinforcement learning algorithm. Evaluation and simulation results show that the proposed viewport prediction and VR streaming schemes outperform conventional solutions in prediction accuracy and VR viewing experience.
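The federated training step at the heart of ComPer-FedAvg builds on FedAvg-style aggregation, in which a server averages model parameters from clients without collecting their raw viewing data. The paper's algorithm additionally personalizes per user, which is not shown here; the sketch below illustrates only the generic weighted-averaging step, with the function name `fedavg` and the plain-list weight format being illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of a FedAvg-style aggregation step, NOT the paper's
# ComPer-FedAvg: each client trains locally on private viewing traces, and
# only model parameters (not raw data) are averaged by the server.
from typing import Dict, List

def fedavg(client_weights: List[Dict[str, List[float]]],
           client_sizes: List[int]) -> Dict[str, List[float]]:
    """Average client model parameters, weighted by local dataset size."""
    total = sum(client_sizes)
    keys = client_weights[0].keys()
    return {
        k: [sum(w[k][i] * n for w, n in zip(client_weights, client_sizes)) / total
            for i in range(len(client_weights[0][k]))]
        for k in keys
    }
```

Because only parameter vectors leave each device, users' head-movement traces stay local, which is what makes the training privacy-preserving.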
