Light Field Neural Rendering

Note: We are not able to review this paper.

PubDate: Mar 2022

Teams: University of British Columbia; Vector Institute for AI; Canada CIFAR AI Chair; Google

Writers: Mohammed Suhail, Carlos Esteves, Leonid Sigal, Ameesh Makadia

PDF: Light Field Neural Rendering

Abstract

Classical light field rendering for novel view synthesis can accurately reproduce view-dependent effects such as reflection, refraction, and translucency, but requires a dense view sampling of the scene. Methods based on geometric reconstruction need only sparse views, but cannot accurately model non-Lambertian effects. We introduce a model that combines the strengths and mitigates the limitations of these two directions. By operating on a four-dimensional representation of the light field, our model learns to represent view-dependent effects accurately. By enforcing geometric constraints during training and inference, the scene geometry is implicitly learned from a sparse set of views. Concretely, we introduce a two-stage transformer-based model that first aggregates features along epipolar lines, then aggregates features along reference views to produce the color of a target ray. Our model outperforms the state-of-the-art on multiple forward-facing and 360° datasets, with larger margins on scenes with severe view-dependent variations.
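The two-stage design described in the abstract can be illustrated with a short, hypothetical PyTorch sketch: a first transformer attends over feature samples along each reference view's epipolar line, and a second transformer attends across the resulting per-view summaries to produce the target ray's color. All class names, dimensions, and the mean-pooling readout below are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class TwoStageLightFieldAggregator(nn.Module):
    """Hypothetical sketch of two-stage epipolar/view aggregation.

    Stage 1 attends over samples along each reference view's epipolar
    line; stage 2 attends across the per-view summaries to predict the
    RGB color of the target ray. Layer sizes are illustrative only.
    """

    def __init__(self, feat_dim=64, n_heads=4, n_layers=2):
        super().__init__()
        make_encoder = lambda: nn.TransformerEncoder(
            nn.TransformerEncoderLayer(feat_dim, n_heads, batch_first=True),
            n_layers,
        )
        self.epipolar_stage = make_encoder()  # along epipolar lines
        self.view_stage = make_encoder()      # across reference views
        self.to_rgb = nn.Linear(feat_dim, 3)

    def forward(self, feats):
        # feats: (rays, views, samples, feat_dim) -- features of points
        # sampled along each reference view's epipolar line.
        R, V, S, D = feats.shape
        # Stage 1: attend over the S samples of each (ray, view) pair.
        x = self.epipolar_stage(feats.reshape(R * V, S, D))
        # Mean-pool to one summary token per view (a simplification;
        # a learned readout could replace this).
        per_view = x.mean(dim=1).reshape(R, V, D)
        # Stage 2: attend across the V reference views, then fuse.
        fused = self.view_stage(per_view).mean(dim=1)
        return self.to_rgb(fused)  # (rays, 3)

# Usage: 8 target rays, 5 reference views, 32 epipolar samples each.
rgb = TwoStageLightFieldAggregator()(torch.randn(8, 5, 32, 64))
print(rgb.shape)  # torch.Size([8, 3])
```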
