Avat3r: Large Animatable Gaussian Reconstruction Model for High-fidelity 3D Head Avatars

Editor: 广东客   |   Category: CV   |   March 4, 2025

Note: We do not have the ability to review papers.

PubDate: Feb 2025

Teams: Technical University of Munich, Meta Reality Labs

Writers: Tobias Kirschstein, Javier Romero, Artem Sevastopolsky, Matthias Nießner, Shunsuke Saito

PDF: Avat3r: Large Animatable Gaussian Reconstruction Model for High-fidelity 3D Head Avatars

Abstract

Traditionally, creating photo-realistic 3D head avatars requires a studio-level multi-view capture setup and expensive optimization during test-time, limiting the use of digital human doubles to the VFX industry or offline renderings.

To address this shortcoming, we present Avat3r, which regresses a high-quality and animatable 3D head avatar from just a few input images, vastly reducing compute requirements during inference. More specifically, we make Large Reconstruction Models animatable and learn a powerful prior over 3D human heads from a large multi-view video dataset. For better 3D head reconstructions, we employ position maps from DUSt3R and generalized feature maps from the human foundation model Sapiens. To animate the 3D head, our key discovery is that simple cross-attention to an expression code is already sufficient. Finally, we increase robustness by feeding input images with different expressions to our model during training, enabling the reconstruction of 3D head avatars from inconsistent inputs, e.g., an imperfect phone capture with accidental movement, or frames from a monocular video.
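
The abstract's key animation claim is that simple cross-attention from the reconstructed 3D representation to an expression code is already sufficient. As a rough illustration only, here is a minimal PyTorch sketch of that idea; the module name, the dimensions, and the use of per-Gaussian tokens as attention queries are assumptions for illustration, not the paper's actual architecture.

```python
# Hedged sketch: per-Gaussian tokens attend to a single expression code.
# All names and shapes are illustrative assumptions, not the authors' code.
import torch
import torch.nn as nn

class ExpressionCrossAttention(nn.Module):
    """Updates Gaussian tokens by cross-attending to an expression code."""
    def __init__(self, token_dim=256, expr_dim=128, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(
            embed_dim=token_dim, num_heads=num_heads,
            kdim=expr_dim, vdim=expr_dim, batch_first=True)
        self.norm = nn.LayerNorm(token_dim)

    def forward(self, gaussian_tokens, expression_code):
        # gaussian_tokens: (B, N, token_dim), one token per 3D Gaussian
        # expression_code: (B, expr_dim), one expression vector per target frame
        expr = expression_code.unsqueeze(1)            # (B, 1, expr_dim)
        out, _ = self.attn(gaussian_tokens, expr, expr)
        return self.norm(gaussian_tokens + out)        # residual update

# Example: re-animate N reconstructed Gaussians with a new expression code
tokens = torch.randn(1, 4096, 256)
expr = torch.randn(1, 128)
animated_tokens = ExpressionCrossAttention()(tokens, expr)
```

In this reading, the reconstruction model produces expression-agnostic tokens once, and animation reduces to a cheap conditioning step per frame, which matches the abstract's emphasis on low inference cost.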

We compare Avat3r with current state-of-the-art methods for few-input and single-input scenarios, and find that our method has a competitive advantage in both tasks. Finally, we demonstrate the wide applicability of our proposed model, creating 3D head avatars from images of different sources, smartphone captures, single images, and even out-of-domain inputs like antique busts.

Article link: https://paper.nweon.com/16226
