Live 3D Portrait: Real-Time Radiance Fields for Single-Image Portrait View Synthesis

Note: We do not have the ability to review papers.

PubDate: Aug 2023

Teams: 1 UC San Diego, 2 NVIDIA, 3 Stanford University

Writers: Alex Trevithick 1, Matthew Chan 2, Michael Stengel 2, Eric R. Chan 3, Chao Liu 2, Zhiding Yu 2, Sameh Khamis 2, Manmohan Chandraker 1, Ravi Ramamoorthi 1, Koki Nagano 2

PDF: Live 3D Portrait: Real-Time Radiance Fields for Single-Image Portrait View Synthesis

Project: Live 3D Portrait: Real-Time Radiance Fields for Single-Image Portrait View Synthesis

Abstract

We present a one-shot method to infer and render a photorealistic 3D representation from a single unposed image (e.g., face portrait) in real-time. Given a single RGB input, our image encoder directly predicts a canonical triplane representation of a neural radiance field for 3D-aware novel view synthesis via volume rendering. Our method is fast (24 fps) on consumer hardware, and produces higher quality results than strong GAN-inversion baselines that require test-time optimization. To train our triplane encoder pipeline, we use only synthetic data, showing how to distill the knowledge from a pretrained 3D GAN into a feedforward encoder. Technical contributions include a Vision Transformer-based triplane encoder, a camera data augmentation strategy, and a well-designed loss function for synthetic data training. We benchmark against the state-of-the-art methods, demonstrating significant improvements in robustness and image quality in challenging real-world settings. We showcase our results on portraits of faces (FFHQ) and cats (AFHQ), but our algorithm can also be applied in the future to other categories with a 3D-aware image generator.
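To make the pipeline in the abstract more concrete, here is a minimal sketch (not the authors' code) of the idea it describes: an encoder maps a single RGB image to a canonical triplane, a small MLP decodes triplane features sampled at 3D points into color and density, and novel views come from standard volume rendering. All module names, layer sizes, plane resolutions, and the toy ray/sampling setup below are illustrative assumptions rather than values from the paper; the real encoder is Vision Transformer-based and is trained by distilling a pretrained 3D GAN.

```python
# Hypothetical sketch of an image -> triplane -> volume rendering pipeline.
# Shapes and architectures are assumptions, not the paper's actual design.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TriplaneEncoder(nn.Module):
    """Stand-in for the ViT-based encoder: image -> 3 feature planes (XY, XZ, YZ)."""
    def __init__(self, feat_dim=32, plane_res=64):
        super().__init__()
        self.feat_dim, self.plane_res = feat_dim, plane_res
        self.backbone = nn.Sequential(            # a real system would use a ViT here
            nn.Conv2d(3, 64, 4, stride=4), nn.ReLU(),
            nn.Conv2d(64, 3 * feat_dim, 3, padding=1),
        )

    def forward(self, img):                        # img: (B, 3, H, W)
        planes = self.backbone(img)                # (B, 3*F, H/4, W/4)
        planes = F.interpolate(planes, size=self.plane_res,
                               mode="bilinear", align_corners=False)
        return planes.view(img.shape[0], 3, self.feat_dim,
                           self.plane_res, self.plane_res)


class TriplaneDecoder(nn.Module):
    """Tiny MLP mapping sampled triplane features to RGB color and density."""
    def __init__(self, feat_dim=32):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 4))  # RGB + sigma

    def forward(self, planes, pts):                 # pts: (B, N, 3) in [-1, 1]
        b, n, _ = pts.shape
        feats = 0
        # Project each 3D point onto the three axis-aligned planes and sum features.
        for i, dims in enumerate([(0, 1), (0, 2), (1, 2)]):
            grid = pts[..., dims].view(b, n, 1, 2)             # (B, N, 1, 2)
            sampled = F.grid_sample(planes[:, i], grid,        # (B, F, N, 1)
                                    align_corners=False)
            feats = feats + sampled.squeeze(-1).permute(0, 2, 1)  # (B, N, F)
        out = self.mlp(feats / 3.0)
        return torch.sigmoid(out[..., :3]), F.softplus(out[..., 3])


def volume_render(rgb, sigma, z_vals):
    """Standard NeRF-style alpha compositing along each ray."""
    deltas = z_vals[..., 1:] - z_vals[..., :-1]
    deltas = torch.cat([deltas, torch.full_like(deltas[..., :1], 1e10)], dim=-1)
    alpha = 1.0 - torch.exp(-sigma * deltas)
    trans = torch.cumprod(torch.cat(
        [torch.ones_like(alpha[..., :1]), 1.0 - alpha + 1e-10], dim=-1),
        dim=-1)[..., :-1]
    weights = alpha * trans
    return (weights.unsqueeze(-1) * rgb).sum(dim=-2)            # (B, R, 3)


if __name__ == "__main__":
    enc, dec = TriplaneEncoder(), TriplaneDecoder()
    img = torch.rand(1, 3, 128, 128)                # single unposed portrait
    planes = enc(img)                               # canonical triplane
    rays, samples = 16, 8                           # toy ray/sample counts
    z_vals = torch.linspace(0.1, 1.0, samples).expand(1, rays, samples)
    pts = torch.rand(1, rays, samples, 3) * 2 - 1   # placeholder sample points
    rgb, sigma = dec(planes, pts.view(1, -1, 3))
    rgb = rgb.view(1, rays, samples, 3)
    sigma = sigma.view(1, rays, samples)
    novel_view_pixels = volume_render(rgb, sigma, z_vals)
    print(novel_view_pixels.shape)                  # torch.Size([1, 16, 3])
```

In this sketch the sample points are random placeholders; a real renderer would generate them from camera rays at the target viewpoint, which is how the method produces 3D-aware novel views from the single predicted triplane.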
