Layout-Guided Novel View Synthesis from a Single Indoor Panorama

PubDate: Mar 2021

Teams: ShanghaiTech University; KooLab, Manycore; Institute of High Performance Computing, A*STAR; Shanghai Engineering Research Center of Intelligent Vision and Imaging

Writers: Jiale Xu, Jia Zheng, Yanyu Xu, Rui Tang, Shenghua Gao

PDF: Layout-Guided Novel View Synthesis from a Single Indoor Panorama

Abstract

Existing view synthesis methods mainly focus on perspective images and have shown promising results. However, due to the limited field-of-view of the pinhole camera, their performance quickly degrades under large camera movements. In this paper, we make the first attempt to generate novel views from a single indoor panorama while taking large camera translations into consideration. To tackle this challenging problem, we first use Convolutional Neural Networks (CNNs) to extract deep features and estimate the depth map from the source-view image. Then, we leverage the room layout prior, a strong structural constraint of the indoor scene, to guide the generation of target views. More concretely, we estimate the room layout in the source view and transform it into the target viewpoint as guidance. Meanwhile, we also constrain the room layout of the generated target-view images to enforce geometric consistency. To validate the effectiveness of our method, we further build a large-scale photo-realistic dataset containing both small and large camera translations. The experimental results on our challenging dataset demonstrate that our method achieves state-of-the-art performance. The project page is at this https URL.
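The geometric core of the pipeline described above — lifting each panorama pixel into 3D using its estimated depth, shifting by the camera translation, and reprojecting onto the target equirectangular image — can be sketched as follows. This is an illustrative NumPy sketch under assumed conventions (function name, axis conventions, and angle parameterization are not from the paper):

```python
import numpy as np

def reproject_panorama(depth, t):
    """Forward-map equirectangular pixels to target-view pixel coordinates,
    given a per-pixel depth map and a camera translation t (illustrative sketch).

    depth: (H, W) array of depths along each panorama ray.
    t: length-3 translation of the target camera in the source frame.
    Returns (new_u, new_v, new_depth), each of shape (H, W).
    """
    H, W = depth.shape
    # Pixel grid -> spherical angles (longitude in [-pi, pi), latitude in (-pi/2, pi/2)).
    u = (np.arange(W) + 0.5) / W
    v = (np.arange(H) + 0.5) / H
    lon = (u - 0.5) * 2 * np.pi
    lat = (0.5 - v) * np.pi
    lon, lat = np.meshgrid(lon, lat)  # both (H, W)
    # Unit ray directions (x right, y up, z forward -- an assumed convention).
    dirs = np.stack([np.cos(lat) * np.sin(lon),
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)], axis=-1)
    # 3D points in the source frame, then expressed relative to the target camera.
    pts = dirs * depth[..., None] - np.asarray(t, dtype=float)
    new_depth = np.linalg.norm(pts, axis=-1)
    d = pts / new_depth[..., None]
    # Back to spherical angles, then to target pixel coordinates.
    new_lon = np.arctan2(d[..., 0], d[..., 2])
    new_lat = np.arcsin(np.clip(d[..., 1], -1.0, 1.0))
    new_u = (new_lon / (2 * np.pi) + 0.5) * W - 0.5
    new_v = (0.5 - new_lat / np.pi) * H - 0.5
    return new_u, new_v, new_depth
```

With zero translation the mapping reduces to the identity, which is a quick sanity check; a real system would additionally splat colors to the target grid and handle occlusions, which the paper's learned components address.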
