Improving Novel View Synthesis of 360° Scenes in Extremely Sparse Views by Jointly Training Hemisphere Sampled Synthetic Images
Note: We don't have the ability to review papers.
PubDate: May 2025
Teams: Ghent University, University of Dundee
Writers: Guangan Chen, Anh Minh Truong, Hanhe Lin, Michiel Vlaminck, Wilfried Philips, Hiep Luong
Abstract
challenge due to the limited input in sparse-view cases. Retraining a diffusion-based image enhancement model on our created dataset, we further improve the quality of the point-cloud-rendered images by removing artifacts. We compare our framework with benchmark methods in cases of only four input views, demonstrating significant improvement in novel view synthesis under extremely sparse-view conditions for 360° scenes. The source code is available at https://github.com/angchen-dev/hemiSparseGS.
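The title's "hemisphere sampled synthetic images" implies rendering extra training views from camera poses distributed over a hemisphere around the scene. Below is a minimal sketch of one way such poses could be generated, assuming a Fibonacci-lattice layout on the upper hemisphere, a look-at convention where the camera's -z axis faces the scene center, and an illustrative helper name `sample_hemisphere_poses`; none of these specifics are taken from the paper or its repository.

```python
import numpy as np

def sample_hemisphere_poses(n_views, radius=4.0, up=np.array([0.0, 0.0, 1.0])):
    """Sample camera positions on the upper hemisphere around the scene
    center with a Fibonacci lattice, and build 4x4 camera-to-world poses
    that look at the origin. Conventions here are illustrative assumptions,
    not the paper's actual implementation."""
    golden = (1.0 + 5 ** 0.5) / 2.0
    poses = []
    for i in range(n_views):
        # Fibonacci lattice restricted to the upper hemisphere (z > 0).
        z = (i + 0.5) / n_views              # normalized height in (0, 1)
        r = np.sqrt(1.0 - z * z)             # radius of the horizontal circle
        phi = 2.0 * np.pi * i / golden       # azimuth from the golden ratio
        position = radius * np.array([r * np.cos(phi), r * np.sin(phi), z])

        # Look-at rotation: forward points from the camera toward the origin.
        forward = -position / np.linalg.norm(position)
        right = np.cross(forward, up)
        right /= np.linalg.norm(right)
        true_up = np.cross(right, forward)

        c2w = np.eye(4)
        c2w[:3, 0] = right
        c2w[:3, 1] = true_up
        c2w[:3, 2] = -forward                # OpenGL-style: camera looks down -z
        c2w[:3, 3] = position
        poses.append(c2w)
    return np.stack(poses)

poses = sample_hemisphere_poses(64)
print(poses.shape)  # (64, 4, 4)
```

The Fibonacci lattice is just one common choice for near-uniform angular coverage; any sampler that spreads poses evenly over the hemisphere would serve the same purpose of supplying synthetic views for joint training.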