
Content Adaptive Representations of Omnidirectional Videos for Cinematic Virtual Reality

Note: We do not have the ability to review this paper.

PubDate: October 2015

Teams: Stanford University

Writers: Matt Yu; Haricharan Lakshman; Bernd Girod

PDF: Content Adaptive Representations of Omnidirectional Videos for Cinematic Virtual Reality

Abstract

Cinematic virtual reality provides an immersive visual experience by presenting omnidirectional videos of real-world scenes. A key challenge is to develop efficient representations of omnidirectional videos in order to maximize coding efficiency under resource constraints, specifically, number of samples and bitrate. We formulate the choice of representation as a multi-dimensional, multiple-choice knapsack problem and show that the resulting representations adapt well to varying content. We also show that separation of the sampling and bit allocation constraints leads to a computationally efficient solution using Lagrangian optimization with only minor performance loss. Results across images and videos show significant coding gains over standard representations.
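The Lagrangian approach mentioned in the abstract decouples the per-region choice of representation from the global budget: for a fixed multiplier, each region independently picks the option minimizing distortion plus the weighted cost, and the multiplier is then adjusted to meet the budget. Below is a minimal sketch of that idea for a single rate constraint; the candidate lists, budget, and function names are illustrative assumptions, not taken from the paper.

```python
def lagrangian_select(candidates, lam):
    """For a fixed multiplier lam, pick for each region the candidate
    (rate, distortion) minimizing distortion + lam * rate.
    The choice is independent per region, which is what makes the
    relaxation computationally cheap."""
    return [min(options, key=lambda o: o[1] + lam * o[0])
            for options in candidates]


def solve(candidates, rate_budget, lo=0.0, hi=1e6, iters=50):
    """Bisect on lam until the total rate of the picks fits the budget.
    Lagrangian selection only reaches convex-hull operating points,
    so a small duality gap relative to the exact knapsack solution
    may remain (the 'minor performance loss' noted in the abstract)."""
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        picks = lagrangian_select(candidates, lam)
        total_rate = sum(r for r, _ in picks)
        if total_rate > rate_budget:
            lo = lam   # over budget: penalize rate more strongly
        else:
            hi = lam   # under budget: allow spending more rate
    return lagrangian_select(candidates, hi)


# Hypothetical example: two regions, each with (rate_kbps, distortion) options.
candidates = [
    [(100, 40.0), (200, 25.0), (400, 15.0)],
    [(150, 35.0), (300, 20.0), (600, 10.0)],
]
print(solve(candidates, rate_budget=500))
```

In the paper's setting there are two budgets (sample count and bitrate); the same per-region independence applies once the constraints are separated, with one multiplier per constraint.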
