Selective Timewarp Based on Embedded Motion Vectors for Interactive Cloud Virtual Reality

Note: We don't have the ability to review this paper.

PubDate: December 2018

Teams: Seoul National University of Science and Technology; WILUS Inc

Writers: Thanh Cong Nguyen; Sanghyun Kim; Ju-Hyung Son; Ji-Hoon Yun

PDF: Selective Timewarp Based on Embedded Motion Vectors for Interactive Cloud Virtual Reality


Interactive virtual reality (VR) services such as VR gaming require considerable computing power to render high-quality VR images at a high frame rate. Offloading such VR processing to a cloud or edge computing entity is therefore promising, but the added latency causes the user to see an image of a past viewport, resulting in prohibitive motion sickness. Time warp is a VR technique that warps a rendered image before scanning it out to the display, correcting for head movement that occurs after rendering, and it will play an important role in reducing perceived latency in cloud/edge VR. However, time warp must be applied only to image areas excluding head-locked objects such as the head-up display, menu bars, and notifications, which are intended to hold a fixed position on the screen; otherwise, these objects appear to judder. In this paper, we propose an algorithm that identifies head-locked objects in encoded VR frames at low computational load, with no explicit information about the head-locked objects, enabling selective time warp in cloud/edge VR environments. First, we make a key observation from testbed experiments: the motion vectors embedded in an encoded VR video stream are highly correlated with the user's head motion. Based on this finding, the head-locked object detection algorithm is designed to 1) find a raw shape of each head-locked object from the motion vectors embedded in frames and 2) identify the exact pixel-level head-locked region within a limited search area (along the boundary of each raw shape) by monitoring pixel color changes over frames. To achieve even lower computational load, the costly part of the algorithm is activated only when a new head-locked object appears. The experimental study demonstrates that the algorithm detects multiple head-locked objects with zero pixel error under limited computational load.
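The motion-vector stage can be sketched as follows. This is a minimal illustration of the general idea, not the authors' implementation: it assumes per-macroblock motion vectors have already been extracted from the encoded stream into a NumPy array, and the function name and thresholds are hypothetical. The intuition from the paper's key observation is that, while the head is moving, most blocks carry motion vectors correlated with the head motion, whereas head-locked objects stay fixed on screen and show near-zero vectors.

```python
import numpy as np

def head_locked_candidates(motion_vectors, head_speed,
                           mv_threshold=0.5, head_threshold=1.0):
    """Flag macroblocks whose motion vectors stay near zero while
    the head moves (hypothetical sketch, not the paper's code).

    motion_vectors: (H, W, 2) array of per-macroblock motion vectors
                    extracted from the encoded frame.
    head_speed:     scalar head angular speed for this frame interval.
    Returns a boolean (H, W) mask of candidate head-locked blocks,
    or None when the head is too still for the test to discriminate.
    """
    if head_speed < head_threshold:
        # With no head motion, background blocks also have near-zero
        # vectors, so the test is uninformative for this frame.
        return None
    mv_magnitude = np.linalg.norm(motion_vectors, axis=-1)
    return mv_magnitude < mv_threshold
```

In practice such a per-frame mask would be accumulated over several frames to obtain a stable raw shape of each head-locked object before the boundary refinement stage.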
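The pixel-level refinement step could likewise be sketched as below; again a hypothetical illustration rather than the paper's algorithm. The premise is that pixels of a head-locked object keep a (nearly) constant color across frames while the head moves, so only pixels in a narrow band around the raw-shape boundary need to be re-tested.

```python
import numpy as np

def refine_boundary(frames, raw_mask, change_threshold=1.0):
    """Refine a raw head-locked mask to pixel accuracy by monitoring
    color changes over frames (hypothetical sketch).

    frames:   (T, H, W, 3) array of consecutive frames captured while
              the head is moving.
    raw_mask: (H, W) boolean raw shape from the motion-vector stage.
    Returns a refined boolean (H, W) mask.
    """
    # Temporal color variation per pixel: low for head-locked pixels,
    # high for background pixels that move under head motion.
    variation = frames.astype(float).std(axis=0).mean(axis=-1)
    stable = variation < change_threshold

    # Limited search area: pixels whose 4-neighborhood mixes inside
    # and outside of the raw shape, i.e. the raw boundary band.
    padded = np.pad(raw_mask, 1, mode='edge')
    up, down = padded[:-2, 1:-1], padded[2:, 1:-1]
    left, right = padded[1:-1, :-2], padded[1:-1, 2:]
    neigh_any = up | down | left | right
    neigh_all = up & down & left & right
    band = neigh_any & ~neigh_all

    # Re-decide only within the band; interior pixels keep the raw label.
    refined = raw_mask.copy()
    refined[band] = stable[band]
    return refined
```

Restricting the color test to the boundary band keeps the per-frame cost low, which matches the paper's goal of identifying the exact pixel-level head-locked region under limited computational load.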