
Avat3r: Large Animatable Gaussian Reconstruction Model for High-fidelity 3D Head Avatars

Editor: 广东客   |   Category: CV   |   Published: March 4, 2025

Note: We do not have the capacity to review papers.

PubDate: Feb 2025

Teams: Technical University of Munich, Meta Reality Labs

Writers: Tobias Kirschstein, Javier Romero, Artem Sevastopolsky, Matthias Nießner, Shunsuke Saito

PDF: Avat3r: Large Animatable Gaussian Reconstruction Model for High-fidelity 3D Head Avatars

Abstract

Traditionally, creating photo-realistic 3D head avatars requires a studio-level multi-view capture setup and expensive optimization during test-time, limiting the use of digital human doubles to the VFX industry or offline renderings.

To address this shortcoming, we present Avat3r, which regresses a high-quality and animatable 3D head avatar from just a few input images, vastly reducing compute requirements during inference. More specifically, we make Large Reconstruction Models animatable and learn a powerful prior over 3D human heads from a large multi-view video dataset. For better 3D head reconstructions, we employ position maps from DUSt3R and generalized feature maps from the human foundation model Sapiens. To animate the 3D head, our key discovery is that simple cross-attention to an expression code is already sufficient. Finally, we increase robustness by feeding input images with different expressions to our model during training, enabling the reconstruction of 3D head avatars from inconsistent inputs, e.g., an imperfect phone capture with accidental movement, or frames from a monocular video.
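The cross-attention mechanism the abstract describes — letting the reconstructed head tokens attend to an expression code — can be sketched roughly as follows. This is a minimal NumPy illustration under assumed dimensions and randomly initialized projections, not the authors' implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def expression_cross_attention(tokens, expr_code, d_k=64, rng=None):
    """Cross-attention from head-avatar tokens (queries) to an
    expression code (keys/values), as suggested by the abstract.

    tokens:    (N, D) per-token features of the reconstructed head
    expr_code: (M, D) expression-code tokens driving the animation
    Projection matrices below stand in for learned weights.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    D = tokens.shape[1]
    Wq = rng.standard_normal((D, d_k)) / np.sqrt(D)
    Wk = rng.standard_normal((D, d_k)) / np.sqrt(D)
    Wv = rng.standard_normal((D, D)) / np.sqrt(D)

    Q = tokens @ Wq                           # (N, d_k)
    K = expr_code @ Wk                        # (M, d_k)
    V = expr_code @ Wv                        # (M, D)
    attn = softmax(Q @ K.T / np.sqrt(d_k))    # (N, M) attention weights
    return tokens + attn @ V                  # residual update, (N, D)
```

Conceptually, each head token queries the expression code and mixes in expression information, so the same reconstructed avatar can be re-posed by swapping the expression code.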

We compare Avat3r with current state-of-the-art methods for few-input and single-input scenarios, and find that our method has a competitive advantage in both tasks. Finally, we demonstrate the wide applicability of our proposed model, creating 3D head avatars from images of different sources, smartphone captures, single images, and even out-of-domain inputs like antique busts.

Permalink: https://paper.nweon.com/16226
