FlexGen: Flexible Multi-View Generation from Text and Image Inputs

Editor: 广东客   |   Category: CV   |   March 27, 2025

Note: We do not have the ability to review papers.

PubDate: Oct 2024

Teams: HKUST(GZ), HKUST, Quwan

Writers: Xinli Xu, Wenhang Ge, Jiantao Lin, Jiawei Feng, Lie Xu, HanFeng Zhao, Shunsi Zhang, Ying-Cong Chen

PDF: FlexGen: Flexible Multi-View Generation from Text and Image Inputs

Abstract

In this work, we introduce FlexGen, a flexible framework designed to generate controllable and consistent multi-view images, conditioned on a single-view image, a text prompt, or both. FlexGen tackles the challenges of controllable multi-view synthesis through additional conditioning on 3D-aware text annotations. We utilize the strong reasoning capabilities of GPT-4V to generate these annotations: by analyzing four orthogonal views of an object arranged as a tiled multi-view image, GPT-4V can produce text annotations that capture 3D-aware information such as spatial relationships. By integrating this control signal with the proposed adaptive dual-control module, our model can generate multi-view images that correspond to the specified text. FlexGen supports multiple controllable capabilities, allowing users to modify text prompts to generate reasonable and corresponding unseen parts. Additionally, users can influence attributes such as appearance and material properties, including metallic and roughness. Extensive experiments demonstrate that our approach offers enhanced controllability along multiple dimensions, marking a significant advancement over existing multi-view diffusion models. This work has substantial implications for fields requiring rapid and flexible 3D content creation, including game development, animation, and virtual reality. Project page: this https URL.
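The abstract describes a preprocessing step in which four orthogonal views of an object are arranged as one tiled image before being passed to GPT-4V for annotation. The paper's own code is not reproduced here; as a minimal illustration of that tiling step (the function name, the view ordering, and the pure-list pixel representation are our own assumptions, not the authors'), the arrangement might look like:

```python
def tile_views(views):
    # views: list of four equally sized 2D pixel grids, assumed ordered
    # front, right, back, left. Returns a single 2x2 tiled grid, row-major:
    #   [ front | right ]
    #   [ back  | left  ]
    # The tiled image would then be sent to a vision-language model
    # (GPT-4V in the paper) to obtain a 3D-aware text annotation.
    assert len(views) == 4, "expected four orthogonal views"
    h = len(views[0])
    top = [views[0][r] + views[1][r] for r in range(h)]
    bottom = [views[2][r] + views[3][r] for r in range(h)]
    return top + bottom

# Toy 2x2 "views" labelled F, R, B, L in place of real renders:
views = [[[c] * 2 for _ in range(2)] for c in "FRBL"]
tiled = tile_views(views)
# tiled is a 4x4 grid; e.g. tiled[0] == ['F', 'F', 'R', 'R']
```

In practice the same layout would be built from rendered RGB images (e.g. with an image library) rather than nested lists; only the 2x2 spatial arrangement matters for prompting the model.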

Article link: https://paper.nweon.com/16252

Copyright: 广州映维网络有限公司 © 2025
