Text2Immersion: Generative Immersive Scene with 3D Gaussians

Note: We don't have the ability to review papers.

PubDate: Dec 2023

Teams: HKUST; Google

Writers: Hao Ouyang, Kathryn Heal, Stephen Lombardi, Tiancheng Sun

PDF: Text2Immersion: Generative Immersive Scene with 3D Gaussians

Abstract

We introduce Text2Immersion, an elegant method for producing high-quality 3D immersive scenes from text prompts. Our proposed pipeline begins by progressively generating a Gaussian cloud using pre-trained 2D diffusion and depth estimation models. This is followed by a refinement stage that interpolates and refines the Gaussian cloud to enhance the details of the generated scene. Distinct from prevalent methods that focus on single objects or indoor scenes, or employ zoom-out trajectories, our approach generates diverse scenes with various objects, even extending to the creation of imaginary scenes. Consequently, Text2Immersion can have wide-ranging implications for applications such as virtual reality, game development, and automated content creation. Extensive evaluations demonstrate that our system surpasses other methods in rendering quality and diversity, further progressing towards text-driven 3D scene generation. We will make the source code publicly accessible at the project page.
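The abstract outlines a two-stage pipeline: first, progressively accumulating a Gaussian cloud by lifting generated 2D views into 3D with estimated depth; second, refining that cloud by interpolation. A minimal structural sketch of this idea follows; all function names, the stub depth estimator, and the midpoint-interpolation refinement are hypothetical placeholders, not the paper's actual method (which relies on pre-trained diffusion and depth models):

```python
import numpy as np

def estimate_depth(image):
    # Stand-in for a pre-trained monocular depth estimator (hypothetical).
    h, w = image.shape[:2]
    return np.ones((h, w))  # dummy unit depth for every pixel

def unproject(image, depth):
    # Lift each pixel to a point-like Gaussian: position from pixel
    # coordinates plus depth, color taken from the pixel itself.
    h, w, _ = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    positions = np.stack([xs, ys, depth], axis=-1).reshape(-1, 3).astype(float)
    colors = image.reshape(-1, 3).astype(float)
    return {"positions": positions, "colors": colors}

def progressive_generation(views):
    # Stage 1: accumulate Gaussians from successively generated views.
    cloud = {"positions": np.empty((0, 3)), "colors": np.empty((0, 3))}
    for image in views:
        depth = estimate_depth(image)
        new = unproject(image, depth)
        cloud = {k: np.concatenate([cloud[k], new[k]]) for k in cloud}
    return cloud

def refine(cloud):
    # Stage 2 (sketch): densify the cloud by inserting midpoints
    # between consecutive Gaussians, interpolating position and color.
    p, c = cloud["positions"], cloud["colors"]
    mid_p = (p[:-1] + p[1:]) / 2
    mid_c = (c[:-1] + c[1:]) / 2
    return {"positions": np.concatenate([p, mid_p]),
            "colors": np.concatenate([c, mid_c])}

# A single dummy 2x2 "view" yields 4 Gaussians, refined to 7.
views = [np.zeros((2, 2, 3))]
cloud = refine(progressive_generation(views))
print(cloud["positions"].shape)
```

The sketch only captures the data flow (views → depth → Gaussians → refined cloud); the actual system optimizes full 3D Gaussian parameters (covariance, opacity) rather than bare points.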
