Spherical Convolution empowered FoV Prediction in 360-degree Video Multicast with Limited FoV Feedback

Note: We don't have the ability to review papers

PubDate: Jan 2022

Teams: Hefei University of Technology; The University of Electro-Communications

Writers: Jie Li, Ling Han, Cong Zhang, Qiyue Li, Zhi Liu

PDF: Spherical Convolution empowered FoV Prediction in 360-degree Video Multicast with Limited FoV Feedback

Abstract

Field-of-view (FoV) prediction is critical in 360-degree video multicast, a key component of emerging Virtual Reality (VR) and Augmented Reality (AR) applications. Most current prediction methods that combine saliency detection with FoV information neither account for the fact that the distortion of projected 360-degree video invalidates the weight sharing of traditional convolutional networks, nor adequately consider the difficulty of obtaining complete multi-user FoV information, both of which degrade prediction performance. This paper proposes a spherical convolution-empowered FoV prediction method: a multi-source prediction framework that combines salient features extracted from the 360-degree video with limited FoV feedback. A spherical convolutional neural network (CNN) replaces the traditional two-dimensional CNN to eliminate the weight-sharing failure caused by projection distortion. Specifically, salient spatiotemporal features are extracted with a spherical convolution-based saliency detection model, and the limited FoV feedback is modeled as a time series by a spherical convolution-empowered gated recurrent unit (GRU) network. Finally, the extracted salient video features are fused with the feedback to predict future user FoVs. Experimental results show that the proposed method outperforms other prediction methods.
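
To make the abstract's two core ideas concrete, here is a minimal PyTorch sketch, not the paper's actual implementation. It assumes a common SphereNet-style approximation of spherical convolution on equirectangular frames: the horizontal sampling offsets of a 3x3 kernel are stretched by 1/cos(latitude) so the kernel covers a roughly constant area on the sphere at every row, and that operator is then used inside a convolutional GRU cell, mirroring the spherical-convolution-empowered GRU described above. The class names (LatAwareSphereConv2d, SphereConvGRUCell) and all parameters are illustrative.

```python
import math

import torch
import torch.nn as nn
import torch.nn.functional as F


class LatAwareSphereConv2d(nn.Module):
    """Latitude-aware convolution for equirectangular frames (illustrative).

    Approximates spherical convolution by stretching the horizontal
    sampling offsets of a 3x3 kernel by 1/cos(latitude), compensating
    for the horizontal stretch of the equirectangular projection.
    """

    def __init__(self, in_ch, out_ch):
        super().__init__()
        # A 1x1 conv over the 9 gathered taps acts as a learnable
        # 3x3 kernel applied at the distortion-corrected locations.
        self.mix = nn.Conv2d(in_ch * 9, out_ch, kernel_size=1)

    def forward(self, x):
        b, _, h, w = x.shape
        dev = x.device
        # Latitude of each row in (-pi/2, pi/2); clamp the stretch near the poles.
        lat = (torch.arange(h, device=dev) + 0.5) / h * math.pi - math.pi / 2
        stretch = (1.0 / torch.cos(lat)).clamp(max=5.0).view(h, 1)
        ys = torch.linspace(-1.0, 1.0, h, device=dev)
        xs = torch.linspace(-1.0, 1.0, w, device=dev)
        gy, gx = torch.meshgrid(ys, xs, indexing="ij")
        taps = []
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                # Horizontal offsets widen toward the poles; a faithful
                # 360-degree version would also wrap around horizontally.
                grid = torch.stack(
                    (gx + dx * stretch * (2.0 / w), gy + dy * (2.0 / h)),
                    dim=-1,
                ).unsqueeze(0).expand(b, -1, -1, -1)
                taps.append(F.grid_sample(x, grid, mode="bilinear",
                                          padding_mode="border",
                                          align_corners=False))
        return self.mix(torch.cat(taps, dim=1))


class SphereConvGRUCell(nn.Module):
    """Convolutional GRU cell whose gates use the spherical conv above."""

    def __init__(self, in_ch, hid_ch):
        super().__init__()
        self.gates = LatAwareSphereConv2d(in_ch + hid_ch, 2 * hid_ch)
        self.cand = LatAwareSphereConv2d(in_ch + hid_ch, hid_ch)

    def forward(self, x, h):
        z, r = torch.sigmoid(self.gates(torch.cat((x, h), 1))).chunk(2, 1)
        n = torch.tanh(self.cand(torch.cat((x, r * h), 1)))
        return (1.0 - z) * h + z * n  # updated spatial FoV state


if __name__ == "__main__":
    # Toy run: 4 time steps of 32x64 single-channel FoV heatmaps.
    cell = SphereConvGRUCell(in_ch=1, hid_ch=8)
    seq = torch.rand(2, 4, 1, 32, 64)
    h = torch.zeros(2, 8, 32, 64)
    for t in range(seq.shape[1]):
        h = cell(seq[:, t], h)
    print(h.shape)  # torch.Size([2, 8, 32, 64])
```

In a full pipeline along the lines the abstract describes, the recurrent state from such a cell would be fused with saliency features (themselves extracted by spherical convolutions) to predict future FoVs; the fusion step is omitted here.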
