
NeRF-Editing: Geometry Editing of Neural Radiance Fields

Note: We do not have the ability to review papers

PubDate: May 2022

Teams: Chinese Academy of Sciences; University of Chinese Academy of Sciences; Cardiff University; Alibaba Group

Writers: Yu-Jie Yuan; Yang-Tian Sun; Yu-Kun Lai; Yuewen Ma; Rongfei Jia; Lin Gao

PDF:

Abstract

Implicit neural rendering, especially Neural Radiance Field (NeRF), has shown great potential in novel view synthesis of a scene. However, current NeRF-based methods cannot enable users to perform user-controlled shape deformation in the scene. While existing works have proposed some approaches to modify the radiance field according to the user's constraints, the modification is limited to color editing or object translation and rotation. In this paper, we propose a method that allows users to perform controllable shape deformation on the implicit representation of the scene, and synthesizes the novel view images of the edited scene without re-training the network. Specifically, we establish a correspondence between the extracted explicit mesh representation and the implicit neural representation of the target scene. Users can first utilize well-developed mesh-based deformation methods to deform the mesh representation of the scene. Our method then utilizes user edits from the mesh representation to bend the camera rays by introducing a tetrahedral mesh as a proxy, obtaining the rendering results of the edited scene. Extensive experiments demonstrate that our framework can achieve ideal editing results not only on synthetic data, but also on real scenes captured by users.
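The abstract's ray-bending step can be pictured as follows: each sample point along a ray cast in the edited (deformed) scene is mapped back to the original space through a tetrahedral proxy mesh, and the unmodified NeRF is queried at the mapped location. The sketch below is a minimal, hypothetical illustration of that idea using barycentric coordinates; all names (`deformed_verts`, `canonical_verts`, `tets`) are assumptions and this is not the authors' implementation, which the paper only outlines at this level.

```python
# Minimal sketch: map a sample point from the deformed space back to the
# canonical NeRF space via a tetrahedral proxy mesh (hypothetical names).
import numpy as np

def barycentric_coords(p, tet_verts):
    """Barycentric coordinates of point p w.r.t. a tetrahedron (4x3 array)."""
    v0, v1, v2, v3 = tet_verts
    T = np.column_stack((v1 - v0, v2 - v0, v3 - v0))  # 3x3 edge matrix
    w = np.linalg.solve(T, p - v0)                     # weights for v1..v3
    return np.array([1.0 - w.sum(), *w])               # weight for v0 completes the set

def find_containing_tet(p, deformed_verts, tets, eps=1e-6):
    """Brute-force search for the deformed tetrahedron containing p
    (a spatial acceleration structure would be used in practice)."""
    for tet in tets:
        bary = barycentric_coords(p, deformed_verts[tet])
        if np.all(bary >= -eps):
            return tet, bary
    return None, None

def bend_sample_to_canonical(p, deformed_verts, canonical_verts, tets):
    """Reuse the barycentric coordinates of p in the corresponding undeformed
    tetrahedron, so the original NeRF can be queried without re-training."""
    tet, bary = find_containing_tet(p, deformed_verts, tets)
    if tet is None:
        return p  # outside the proxy mesh: leave the point unchanged
    return bary @ canonical_verts[tet]
```

In this reading, the density and color of the edited scene at a deformed-space point are taken from the original radiance field evaluated at `bend_sample_to_canonical(p, ...)`, which is what allows rendering the edited scene without retraining the network.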

You may also like...

Paper