Efficient virtual view rendering by merging pre-rendered RGB-D data from multiple cameras

Note: We do not have the ability to review papers.

PubDate: May 2018

Teams: Iwate University

Writers: Yusuke Sasaki; Tadahiro Fujimoto

PDF: Efficient virtual view rendering by merging pre-rendered RGB-D data from multiple cameras

Abstract

A virtual view, or a free-viewpoint video/image, shows an object from an arbitrary viewpoint in 3D space. One typical approach uses multiview RGB videos captured by multiple RGB cameras surrounding the object; a virtual view is then obtained by estimating the object's 3D shape from the videos. This approach has difficulty achieving accuracy and efficiency because it must estimate 3D geometry from 2D images. Recently, RGB-D (RGB-Depth) cameras have become available that capture an RGB video together with a per-pixel depth, the distance from the camera to the object's surface. With an RGB-D camera, the 3D shape of an object's surface can be obtained directly, without estimating 3D from 2D. However, a single RGB-D camera captures only the 3D shape of the surface part it faces. In this research, we propose a method to efficiently render a virtual view using multiple RGB-D cameras. In our method, the 3D shapes of the different surface parts captured by the respective cameras are efficiently merged according to a virtual viewpoint. Each camera is connected to a PC, and all PCs are connected to each other for parallel processing in a PC cluster network. The RGB-D data captured by the cameras must be transferred over the network to be merged. Our method effectively reduces the size of the RGB-D data to transfer through "view-dependent pre-rendering", in which "imperfect" virtual views are rendered from the original RGB-D data captured by the respective cameras, on their own PCs in parallel. This pre-rendering greatly contributes to real-time rendering of the final virtual view.
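The merge step the abstract describes (combining per-camera pre-rendered views at the virtual viewpoint) is essentially depth-based compositing: each camera's PC has already warped its own RGB-D data into the virtual view, and the results are fused per pixel by keeping the surface closest to the virtual camera. Below is a minimal NumPy sketch of that compositing step, not the authors' implementation; the function name, array shapes, and the `no_hit` depth convention are all assumptions for illustration.

```python
import numpy as np

def merge_prerendered_views(colors, depths, no_hit=np.inf):
    """Merge pre-rendered views by a per-pixel nearest-depth (z-buffer) test.

    colors: list of (H, W, 3) float arrays, one "imperfect" virtual view
            per camera, already warped to the virtual viewpoint.
    depths: list of (H, W) float arrays; `no_hit` marks pixels a camera
            did not cover.
    Returns the merged (H, W, 3) color image and (H, W) depth map.
    """
    depth_stack = np.stack(depths)            # (N, H, W)
    color_stack = np.stack(colors)            # (N, H, W, 3)
    # For each pixel, pick the camera whose pre-rendered surface is closest.
    winner = np.argmin(depth_stack, axis=0)   # (H, W) camera indices
    h_idx, w_idx = np.indices(winner.shape)
    merged_color = color_stack[winner, h_idx, w_idx]
    merged_depth = depth_stack[winner, h_idx, w_idx]
    # Pixels no camera covered keep the `no_hit` depth; black them out.
    merged_color[merged_depth == no_hit] = 0.0
    return merged_color, merged_depth
```

In this setup, only the pre-rendered color and depth images (at the virtual view's resolution) cross the network, rather than each camera's full RGB-D stream, which is consistent with the data-reduction benefit the abstract attributes to view-dependent pre-rendering.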
