Learning to compose 6-DoF omnidirectional videos using multi-sphere images

Note: We do not have the ability to review papers

PubDate: Mar 2021

Teams: Tsinghua University; Research Institute of Tsinghua University in Shenzhen

Writers: Jisheng Li, Yuze He, Yubin Hu, Yuxing Han, Jiangtao Wen

PDF: Learning to compose 6-DoF omnidirectional videos using multi-sphere images

Abstract

Omnidirectional video is an essential component of Virtual Reality. Although various methods have been proposed to generate content that can be viewed with six degrees of freedom (6-DoF), existing systems usually involve complex pre-processing such as depth estimation, image in-painting, or stitching. In this paper, we propose a system that uses a 3D ConvNet to generate a multi-sphere image (MSI) representation that can be experienced in 6-DoF VR. The system works directly on conventional omnidirectional VR camera footage, without the need for a depth map or segmentation mask, thereby significantly simplifying the overall complexity of 6-DoF omnidirectional video composition. By using a newly designed weighted sphere sweep volume (WSSV) fusing technique, our approach is compatible with most panoramic VR camera setups. We also propose a ground-truth generation approach for high-quality, artifact-free 6-DoF content that can be used by the research and development community for 6-DoF content generation.
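To give a rough sense of what a "weighted sphere sweep volume" fusion step might involve, here is a minimal sketch of blending per-camera sphere-sweep volumes with view-dependent weights. This is an illustrative assumption, not the paper's actual method: the array layout, the function `weighted_sphere_sweep_fuse`, and the angular-proximity weighting scheme are all hypothetical stand-ins.

```python
import numpy as np

def weighted_sphere_sweep_fuse(warped, cam_dirs, view_dirs, eps=1e-6):
    """Fuse per-camera sphere-sweep volumes with view-dependent weights.

    warped:    (N, D, H, W, C) images warped onto D candidate sphere
               radii, one volume per camera (hypothetical layout).
    cam_dirs:  (N, 3) unit vectors toward each camera from the rig center.
    view_dirs: (H, W, 3) unit viewing direction for each output pixel.
    Returns a fused (D, H, W, C) sweep volume.
    """
    # Weight each camera by the angular proximity of its direction to the
    # viewing ray (one plausible weighting; the paper's exact scheme
    # may differ).
    cos = np.einsum('hwk,nk->nhw', view_dirs, cam_dirs)  # (N, H, W)
    w = np.clip(cos, 0.0, None) + eps                    # ignore back-facing cameras
    w = w / w.sum(axis=0, keepdims=True)                 # normalize over cameras
    # Weighted average over the camera axis.
    return np.einsum('ndhwc,nhw->dhwc', warped, w)
```

In a full system, such a fused volume would then be fed to the 3D ConvNet to predict per-sphere color and alpha, yielding the MSI that is rendered for novel viewpoints.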
