
User-Adaptive Editing for 360° Video Streaming with Deep Reinforcement Learning


PubDate: October 2019

Teams: Université Côte d'Azur

Writers: Lucile Sassatelli; Marco Winckler; Thomas Fisichella; Ramon Aparicio

PDF:

Abstract

The development of 360° video streaming is persistently hindered by the high bandwidth these videos require. Spatially adapting the quality of the sphere to the user's Field of View (FoV) lowers the data rate, but it requires keeping the playback buffer small, predicting the user's motion, or replacing buffered segments to keep their quality up to date with the moving FoV, all three of which are uncertain and risky. We have previously shown that opportunistically regaining control of the FoV with active attention-driving techniques provides additional levers to ease streaming and improve Quality of Experience (QoE). Deep neural networks have recently been shown to achieve the best performance for video streaming adaptation and head motion prediction. This demo is a step forward in the investigation of deep neural network approaches to building user-adaptive and network-adaptive 360° video streaming systems. We show how snap-changes, an attention-driving technique, can be automatically modulated by the user's motion to improve streaming QoE. The snap-changes are controlled by a deep neural network trained on head motion traces with the Deep Reinforcement Learning algorithm A3C.
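To make the approach concrete, the sketch below shows the kind of actor-critic network that A3C trains: it consumes a short window of head-motion features and outputs (a) a probability of triggering a snap-change and (b) a value estimate of the expected QoE-based return. This is a minimal illustration under assumed shapes, features, and reward, not the authors' implementation; the class and function names (`SnapChangePolicy`, `a3c_style_update`) are hypothetical, and only a single advantage-actor-critic update is shown, whereas full A3C runs several asynchronous workers that share these parameters.

```python
# Hypothetical sketch (not the paper's code): an actor-critic policy that decides,
# from recent head-motion features, whether to trigger a snap-change.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SnapChangePolicy(nn.Module):
    def __init__(self, motion_dim=3, hidden_dim=64):
        super().__init__()
        # Encode a short window of head-motion samples (e.g. angular speeds) - assumed features.
        self.encoder = nn.GRU(motion_dim, hidden_dim, batch_first=True)
        # Actor head: distribution over {keep playback as-is, trigger snap-change}.
        self.actor = nn.Linear(hidden_dim, 2)
        # Critic head: estimate of the expected QoE-based return from this state.
        self.critic = nn.Linear(hidden_dim, 1)

    def forward(self, motion_window):
        # motion_window: (batch, history_len, motion_dim)
        _, h = self.encoder(motion_window)
        h = h.squeeze(0)
        return F.softmax(self.actor(h), dim=-1), self.critic(h).squeeze(-1)


def a3c_style_update(policy, optimizer, motion_window, action, reward, next_value, gamma=0.99):
    """One advantage actor-critic step on a batch of transitions (toy reward assumed)."""
    probs, value = policy(motion_window)
    advantage = reward + gamma * next_value - value
    log_prob = torch.log(probs.gather(1, action.view(-1, 1)).squeeze(1) + 1e-8)
    actor_loss = -(log_prob * advantage.detach()).mean()      # policy-gradient term
    critic_loss = advantage.pow(2).mean()                      # value regression term
    entropy = -(probs * torch.log(probs + 1e-8)).sum(-1).mean()  # exploration bonus
    loss = actor_loss + 0.5 * critic_loss - 0.01 * entropy
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    policy = SnapChangePolicy()
    optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)
    # Fake head-motion windows and a QoE-like reward, for illustration only.
    motion = torch.randn(8, 30, 3)
    action = torch.randint(0, 2, (8,))
    reward = torch.randn(8)
    next_value = torch.zeros(8)
    print(a3c_style_update(policy, optimizer, motion, action, reward, next_value))
```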
