Motion Compensated Prediction for Translational Camera Motion in Spherical Video Coding

Note: We do not have the ability to review papers.

PubDate: November 2018

Teams: University of California

Writers: Bharath Vishwanath; Tejaswi Nanjundaswamy; Kenneth Rose

PDF: Motion Compensated Prediction for Translational Camera Motion in Spherical Video Coding

Abstract

Spherical video is the key driving factor for the growth of virtual reality and augmented reality applications, as it offers a truly immersive experience by capturing the entire 3D surroundings. However, it represents an enormous amount of data for storage/transmission, and the success of all related applications is critically dependent on efficient compression. A frequently encountered type of content in this video format arises from translational motion of the camera (e.g., a camera mounted on a moving vehicle). Existing approaches simply project this video onto a plane and use a block-based translational motion model to capture the motion of objects between frames. This ad hoc, simplified approach completely ignores the complex deformations of objects caused by the combined effect of the moving camera and projection onto a plane, rendering it significantly suboptimal. In this paper, we provide an efficient solution tailored to this problem. Specifically, we propose to perform motion compensated prediction by translating pixels along their geodesics, which intersect at the poles corresponding to the camera velocity vector. This setup not only captures the surrounding objects’ motion exactly along the geodesics of the sphere, but also accurately accounts for the deformations caused by projection onto the sphere. Experimental results demonstrate that the proposed framework achieves very significant gains over existing motion models.
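To make the core idea concrete, the sketch below illustrates (in Python) how a pixel on the unit sphere can be translated along the geodesic through itself and the pole given by the camera translation direction. This is only an illustration of the geometric principle described in the abstract, not the authors' codec implementation; the function name `geodesic_predict` and the per-pixel angular displacement `delta` are assumptions introduced here for clarity.

```python
import numpy as np

def geodesic_predict(p, cam_dir, delta):
    """Move a unit-vector pixel direction `p` by angle `delta` along the
    great circle (geodesic) through `p` and the pole `cam_dir`, i.e. the
    point on the sphere aligned with the camera translation direction.

    Illustrative sketch only: the actual prediction scheme in the paper
    may differ in parameterization and search strategy.
    """
    p = p / np.linalg.norm(p)
    e = cam_dir / np.linalg.norm(cam_dir)
    axis = np.cross(e, p)          # normal of the plane spanned by e and p
    n = np.linalg.norm(axis)
    if n < 1e-12:                  # pixel at the pole/antipode stays fixed
        return p
    k = axis / n
    # Rodrigues' rotation formula: rotate p by `delta` about axis k,
    # which keeps p on the geodesic through p and e.
    return (p * np.cos(delta)
            + np.cross(k, p) * np.sin(delta)
            + k * np.dot(k, p) * (1.0 - np.cos(delta)))

# Example: camera translating along +z; a pixel drifts along its geodesic
# away from the pole in the direction of motion (focus of expansion).
pixel = np.array([1.0, 0.0, 1.0])
print(geodesic_predict(pixel, np.array([0.0, 0.0, 1.0]), np.deg2rad(2.0)))
```

Under this sketch, each pixel slides along the great circle connecting it to the pole defined by the camera velocity vector, and the size of the displacement depends on object depth and camera speed; this is one plausible way to realize the geodesic-based prediction the abstract describes.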
