
Constrained Deep Reinforcement Learning for Low-Latency Wireless VR Video Streaming

Note: We do not have the ability to review this paper.

PubDate: February 2022

Teams: The University of Sydney

Writers: Shaoang Li; Changyang She; Yonghui Li; Branka Vucetic

PDF: Constrained Deep Reinforcement Learning for Low-Latency Wireless VR Video Streaming

Abstract

Wireless virtual reality (VR) systems can provide users with immersive experiences, but they require low latency and a high data rate. Edge intelligence is a promising architecture for meeting these conflicting requirements with limited radio resources. It exploits an edge server co-located with the base station to predict the field of view (FoV) of the next VR video segment, pre-render the three-dimensional video within the predicted FoV, and transmit it to the user in advance. Since the prediction is not error-free, the predicted FoV may not cover the actual FoV requested by the user, resulting in video quality loss. To address this issue, we first formulate a constrained partially observable Markov decision process problem to optimize the redundant range of the FoV according to the head motion prediction and the redundant range of the previous video segment. Then, we develop a constrained deep reinforcement learning algorithm to minimize the video quality loss ratio subject to the latency constraint. Simulation results show that the proposed algorithm outperforms existing methods, reducing the video quality loss ratio from 6.9% to 4.9% and the latency from 0.72 s to 0.63 s.
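To make the constrained-RL idea concrete, below is a minimal sketch (not the authors' algorithm or code) of a Lagrangian primal-dual policy-gradient update that picks a redundant FoV range while keeping expected latency under a budget. The environment model, action set, latency budget, and all parameters are illustrative assumptions.

```python
# Sketch only: a toy primal-dual constrained policy-gradient loop.
# Everything here (head-motion noise, latency model, budget) is assumed,
# not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical discrete actions: extra FoV margin (degrees) around the predicted FoV.
REDUNDANT_RANGES = np.array([0.0, 5.0, 10.0, 15.0, 20.0])
LATENCY_BUDGET = 0.7  # seconds (assumed constraint threshold)

def simulate_segment(margin, rng):
    """Toy environment: a wider margin lowers the chance of missing the true FoV
    but increases transmission time. Stand-in for head-motion prediction error
    and the wireless channel."""
    prediction_error = abs(rng.normal(0.0, 8.0))           # degrees of head-motion error
    quality_loss = 1.0 if prediction_error > margin else 0.0
    latency = 0.4 + 0.02 * margin + rng.normal(0.0, 0.05)  # more data -> more delay
    return quality_loss, latency

# Softmax policy over margins, trained to minimize
# E[quality_loss] + lambda * (E[latency] - budget).
theta = np.zeros(len(REDUNDANT_RANGES))   # policy logits
lam = 0.0                                 # Lagrange multiplier for the latency constraint
lr_theta, lr_lam, batch = 0.05, 0.01, 256

for step in range(500):
    probs = np.exp(theta - theta.max())
    probs /= probs.sum()
    actions = rng.choice(len(REDUNDANT_RANGES), size=batch, p=probs)
    losses, latencies = zip(*(simulate_segment(REDUNDANT_RANGES[a], rng) for a in actions))
    losses, latencies = np.array(losses), np.array(latencies)

    # Lagrangian cost per sample; lower is better.
    cost = losses + lam * (latencies - LATENCY_BUDGET)
    baseline = cost.mean()

    # REINFORCE-style gradient step on the logits (descend the cost).
    grad = np.zeros_like(theta)
    for a, c in zip(actions, cost):
        g = -probs.copy()
        g[a] += 1.0                        # d log pi(a) / d theta
        grad += (c - baseline) * g
    theta -= lr_theta * grad / batch

    # Dual ascent: the multiplier grows while the latency constraint is violated.
    lam = max(0.0, lam + lr_lam * (latencies.mean() - LATENCY_BUDGET))

best = REDUNDANT_RANGES[np.argmax(probs)]
print(f"preferred redundant range ~ {best:.0f} deg, lambda = {lam:.3f}")
```

The dual variable lets the policy trade video quality loss against latency automatically: when average latency exceeds the budget, the multiplier rises and the policy is pushed toward smaller margins. The paper's actual method operates on a constrained POMDP with deep networks, which this toy loop only gestures at.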
