IMMERSIVE LIGHT FIELD VIDEO WITH A LAYERED MESH REPRESENTATION

Note: We are not able to provide a review of this paper.

PubDate: June 2020

Teams: Google

Authors: Michael Broxton, John Flynn, Ryan Overbeck, Daniel Erickson, Peter Hedman, Matthew DuVall, Jason Dourgarian, Jay Busch, Matt Whalen, Paul Debevec

PDF: IMMERSIVE LIGHT FIELD VIDEO WITH A LAYERED MESH REPRESENTATION

Project: IMMERSIVE LIGHT FIELD VIDEO WITH A LAYERED MESH REPRESENTATION

Abstract

We present a system for capturing, reconstructing, compressing, and rendering high quality immersive light field video. We record immersive light fields using a custom array of 46 time-synchronized cameras distributed on the surface of a hemispherical, 92cm diameter dome. From this data we produce 6DOF volumetric videos with a wide 80-cm viewing baseline, 10 pixels per degree angular resolution, and a wide field of view (>220 degrees), at 30fps video frame rates. Even though the cameras are placed 18cm apart on average, our system can reconstruct objects as close as 20cm to the camera rig. We accomplish this by leveraging the recently introduced DeepView view interpolation algorithm, replacing its underlying multi-plane image (MPI) scene representation with a collection of spherical shells which are better suited for representing panoramic light field content. We further process this data to reduce the large number of shell layers to a small, fixed number of RGBA+depth layers without significant loss in visual quality. The resulting RGB, alpha, and depth channels in these layers are then compressed using conventional texture atlasing and video compression techniques. The final, compressed representation is lightweight and can be rendered on mobile VR/AR platforms or in a web browser.
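The layered representation described above is ultimately rendered by alpha-compositing a stack of RGBA layers (spherical shells) from back to front. Below is a minimal sketch of that "over" compositing step, assuming premultiplied-alpha layers stored as NumPy arrays; the function name and data layout are illustrative and not taken from the paper.

```python
import numpy as np

def composite_layers(layers):
    """Composite pre-sorted RGBA layers back-to-front with 'over' blending.

    layers: sequence of (H, W, 4) float arrays in [0, 1], farthest shell
            first, with premultiplied-alpha RGB (hypothetical layout).
    Returns an (H, W, 3) rendered image.
    """
    h, w, _ = layers[0].shape
    rgb = np.zeros((h, w, 3))
    for layer in layers:  # iterate from farthest to nearest shell
        color, alpha = layer[..., :3], layer[..., 3:4]
        # Premultiplied 'over' operator: new = src + (1 - src_alpha) * dst
        rgb = color + (1.0 - alpha) * rgb
    return rgb
```

In the actual system the shells are textured meshes drawn by the GPU, so this blending happens in the rasterizer rather than in NumPy, but the compositing math is the same.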
