Learning to Predict Head Pose in Remotely-Rendered Virtual Reality

Note: We are not able to review this paper.

PubDate: May 2023

Teams: Nokia Technologies; Aalto University; University of Helsinki

Writers: Gazi Karam Illahi, Ashutosh Vaishnav, Teemu Kämäräinen, Matti Siekkinen, Mario Di Francesco

PDF: Learning to Predict Head Pose in Remotely-Rendered Virtual Reality

Abstract

Accurate characterization of Head Mounted Display (HMD) pose in a virtual scene is essential for rendering immersive graphics in Extended Reality (XR). Remote rendering employs servers in the cloud or at the edge of the network to overcome the computational limitations of either standalone or tethered HMDs. Unfortunately, it increases the latency experienced by the user; for this reason, predicting HMD pose in advance is highly beneficial, as long as it achieves high accuracy. This work provides a thorough characterization of solutions that forecast HMD pose in remotely-rendered virtual reality (VR) by considering six degrees of freedom. Specifically, it provides an extensive evaluation of pose representations, forecasting methods, machine learning models, and the use of multiple modalities along with joint and separate training. In particular, a novel three-point representation of pose is introduced together with a data fusion scheme for long short-term memory (LSTM) neural networks. Our findings show that machine learning models benefit from using multiple modalities, even though simple statistical models perform surprisingly well. Moreover, joint training is comparable to separate training with carefully chosen pose representation and data fusion strategies.
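The abstract notes that simple statistical models are surprisingly competitive at pose forecasting. A minimal sketch of one such baseline is constant-velocity extrapolation of a 6-DoF pose: each component is linearly extended by the velocity observed over the last sampling interval. The pose layout, function name, and sampling setup below are illustrative assumptions, not the paper's actual method or API.

```python
# Constant-velocity baseline for 6-DoF head-pose forecasting (sketch).
# A pose is an assumed 6-vector: (x, y, z, yaw, pitch, roll).

def forecast_pose(prev, curr, dt, horizon):
    """Extrapolate each pose component `horizon` seconds ahead, assuming
    the per-component velocity observed over the last interval `dt`
    persists. Note: naive extrapolation ignores angle wrap-around."""
    return [c + (c - p) / dt * horizon for p, c in zip(prev, curr)]

# Example: head translating +0.1 m/s along x while yawing at 10 deg/s.
prev = [0.00, 1.6, 0.0, 0.0, 0.0, 0.0]   # pose sampled at t - dt
curr = [0.01, 1.6, 0.0, 1.0, 0.0, 0.0]   # pose sampled at t (dt = 0.1 s)
pred = forecast_pose(prev, curr, dt=0.1, horizon=0.05)  # 50 ms ahead
```

Such a baseline is what a learned model (e.g., an LSTM over multiple input modalities) has to beat to justify its extra cost at the prediction horizons typical of remote rendering.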
