
SNeRF: Stylized Neural Implicit Representations for 3D Scenes

Note: We don't have the ability to review papers.

PubDate: Jul 2022

Teams: Meta

Writers: Thu Nguyen-Phuoc, Feng Liu, Lei Xiao

Supplementary Video: SNeRF: Stylized Neural Implicit Representations for 3D Scenes

PDF: SNeRF: Stylized Neural Implicit Representations for 3D Scenes

Abstract

This paper presents a stylized novel view synthesis method. Applying state-of-the-art stylization methods to novel views frame by frame often causes jittering artifacts due to the lack of cross-view consistency. Therefore, this paper investigates 3D scene stylization, which provides a strong inductive bias for consistent novel view synthesis. Specifically, we adopt the emerging neural radiance fields (NeRF) as our choice of 3D scene representation for their capability to render high-quality novel views for a variety of scenes. However, as rendering a novel view from a NeRF requires a large number of samples, training a stylized NeRF requires an amount of GPU memory that exceeds the capacity of an off-the-shelf GPU. We introduce a new training method to address this problem by alternating the NeRF and stylization optimization steps. Such a method enables us to make full use of our hardware memory capacity to both generate images at higher resolution and adopt more expressive image style transfer methods. Our experiments show that our method produces stylized NeRFs for a wide range of content, including indoor, outdoor and dynamic scenes, and synthesizes high-quality novel views with cross-view consistency.
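The abstract only sketches the alternating training scheme, so the snippet below is a minimal, hypothetical PyTorch illustration of what alternating between scene fitting and stylization could look like. All names here (TinyRadianceField, toy_stylizer, alternate_train) and the blend-based "stylizer" are stand-ins invented for illustration, not the authors' implementation; the key point it mirrors is that the stylization step never backpropagates through the memory-hungry rendering step.

```python
import torch
import torch.nn.functional as F


class TinyRadianceField(torch.nn.Module):
    """Toy stand-in for a NeRF: maps 3-D coordinates to RGB colors."""

    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(3, 64), torch.nn.ReLU(), torch.nn.Linear(64, 3)
        )

    def forward(self, coords):
        return torch.sigmoid(self.net(coords))


def toy_stylizer(images, style_image):
    """Placeholder stylizer: blends renderings toward a style image.
    A real pipeline would run an image style-transfer network here."""
    return 0.5 * images + 0.5 * style_image


def alternate_train(field, coords, content_targets, style_image,
                    rounds=3, nerf_steps=100):
    opt = torch.optim.Adam(field.parameters(), lr=1e-3)
    # Start from stylized versions of the original content views as targets.
    targets = toy_stylizer(content_targets, style_image)
    for _ in range(rounds):
        # Step 1: fit the radiance field to the current stylized targets.
        for _ in range(nerf_steps):
            opt.zero_grad()
            F.mse_loss(field(coords), targets).backward()
            opt.step()
        # Step 2: re-render without gradients and re-stylize the renderings,
        # so stylization memory cost stays independent of the renderer.
        with torch.no_grad():
            targets = toy_stylizer(field(coords), style_image)
    return field


if __name__ == "__main__":
    coords = torch.rand(1024, 3)    # fake sample points standing in for rays
    content = torch.rand(1024, 3)   # fake ground-truth colors
    style = torch.rand(1, 3)        # fake "style" color statistics
    alternate_train(TinyRadianceField(), coords, content, style)
```

In this sketch the two optimization problems only exchange images (renderings in one direction, stylized targets in the other), which is one way the alternation described in the abstract can keep peak GPU memory within a single off-the-shelf card.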
