
A Viewport Prediction Framework for Panoramic Videos

Note: We do not have the ability to review papers.

PubDate: September 2020

Teams: Shenzhen University

Writers: Jinting Tang; Yongkai Huo; Shaoshi Yang; Jianmin Jiang

PDF: A Viewport Prediction Framework for Panoramic Videos

Abstract

Panoramic video is considered an attractive video format, since it provides viewers with an immersive experience, for example in virtual reality (VR) gaming. However, viewers only focus on part of the panoramic video, which is referred to as the viewport. Hence, the resources consumed in distributing the remaining part of the panoramic video are wasted, and it is intuitive to deliver only the video data within this viewport to reduce the distribution cost. Empirically, viewports within a time interval are highly correlated, so the historical trajectory may be used to predict future viewports. On the other hand, a viewer tends to sustain attention on a specific object in a panoramic video. Motivated by these findings, we propose a deep learning-based viewport prediction scheme, namely HOP, where the Historical viewport trajectory of viewers and Object tracking are jointly exploited by long short-term memory (LSTM) networks. Additionally, our solution is capable of predicting multiple future viewports, whereas state-of-the-art contributions support only single-viewport prediction. Simulation results show that our proposed HOP scheme outperforms the benchmarks by up to 33.5% in terms of prediction error.
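The abstract does not specify the network architecture, so the following is only a minimal sketch of the general idea: an LSTM that consumes a past sequence of viewport coordinates concatenated with tracked-object coordinates and regresses multiple future viewport centres in one shot. All dimensions, feature choices, and the class name ViewportLSTM are hypothetical illustrations, not the authors' actual HOP model.

```python
import torch
import torch.nn as nn


class ViewportLSTM(nn.Module):
    """Hypothetical sketch of an LSTM-based viewport predictor.

    Input per time step: past viewport centre (yaw, pitch) concatenated
    with the tracked object's centre (yaw, pitch), i.e. 4 features.
    Output: the next `horizon` viewport centres.
    """

    def __init__(self, feat_dim: int = 4, hidden_dim: int = 128, horizon: int = 5):
        super().__init__()
        self.horizon = horizon
        self.lstm = nn.LSTM(feat_dim, hidden_dim, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden_dim, horizon * 2)  # (yaw, pitch) per future step

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, past_steps, feat_dim)
        _, (h_n, _) = self.lstm(x)
        out = self.head(h_n[-1])               # use the last layer's final hidden state
        return out.view(-1, self.horizon, 2)   # (batch, horizon, 2)


if __name__ == "__main__":
    # 30 past steps of viewport + object-track features for a batch of 8 viewers
    past = torch.randn(8, 30, 4)
    model = ViewportLSTM()
    future = model(past)
    print(future.shape)  # torch.Size([8, 5, 2])
```

Predicting several future steps jointly, as in this sketch, is one simple way to realise the multi-viewport prediction the abstract highlights over single-viewport baselines.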
