Virtual reality with motion parallax by dense optical flow-based depth generation from two spherical images
PubDate: February 2018
Teams: University of Tokyo
Writers: Sarthak Pathak ; Alessandro Moro ; Hiromitsu Fujii ; Atsushi Yamashita ; Hajime Asama
Virtual reality (VR) systems using head-mounted displays (HMDs) can render immersive views of environments, allowing changes of viewpoint position and orientation. When the viewpoint position changes, objects in the scene undergo different displacements depending on their depth. This effect, known as 'motion parallax', is important for depth perception and is easy to implement for computer-generated scenes. Spherical cameras such as the Ricoh Theta S can capture an all-round view of the environment in a single image, making VR possible for real-world scenes as well. Spherical images contain information from all directions and thus allow all possible viewpoint orientations. However, implementing motion parallax for real-world scenes is tedious, as it requires accurate depth information, which is difficult to obtain. In this research, we propose a novel method to easily implement motion parallax for real-world scenes by automatically estimating all-round depth from two arbitrary spherical images. The proposed method estimates dense optical flow between the two images and decomposes it into a depth map. The depth map can then be used to reproject the scene accurately to any desired position and orientation, enabling motion parallax.
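The core geometric idea can be illustrated with a simplified sketch. For a spherical camera translating by a known baseline, a point at depth Z seen along a ray making angle γ with the translation direction is displaced by an angular flow of roughly (baseline / Z)·sin γ, so inverting this relation recovers depth. The NumPy sketch below synthesizes a flow field from a known depth map on an equirectangular grid and then recovers the depth; the grid size, baseline, translation direction, and scene depth function are all illustrative assumptions, and this small-motion approximation is not the paper's exact flow-decomposition method.

```python
import numpy as np

H, W = 64, 128        # equirectangular image size (assumed)
baseline = 0.1        # camera translation in metres (assumed)

# Map each equirectangular pixel to a unit ray direction on the sphere.
lon = (np.arange(W) + 0.5) / W * 2 * np.pi - np.pi    # longitude in [-pi, pi)
lat = np.pi / 2 - (np.arange(H) + 0.5) / H * np.pi    # latitude in (-pi/2, pi/2)
lon, lat = np.meshgrid(lon, lat)
rays = np.stack([np.cos(lat) * np.cos(lon),
                 np.cos(lat) * np.sin(lon),
                 np.sin(lat)], axis=-1)

t = np.array([1.0, 0.0, 0.0])                 # translation direction (assumed known)
cos_g = rays @ t                              # cosine of angle between ray and motion
sin_g = np.sqrt(np.clip(1.0 - cos_g**2, 1e-8, 1.0))

# Forward model: angular flow magnitude for an arbitrary synthetic scene.
true_depth = 2.0 + np.cos(lon) + 0.5 * np.sin(lat)    # depths in ~[0.5, 3.5] m
flow_mag = baseline * sin_g / true_depth              # small-motion approximation

# Decomposition step: invert the flow back into a depth map.
depth = baseline * sin_g / np.maximum(flow_mag, 1e-9)
```

Note that along the translation direction (the epipoles), sin γ vanishes and the flow carries no depth information, which is one reason dense, all-round flow between two spherical images is useful: every scene point is observed at some angle to the motion.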