FlexGen: Flexible Multi-View Generation from Text and Image Inputs

Editor: 广东客   |   Category: CV   |   March 27, 2025

Note: We do not have the ability to review papers

PubDate: Oct 2024

Teams: HKUST(GZ), HKUST, Quwan

Writers: Xinli Xu, Wenhang Ge, Jiantao Lin, Jiawei Feng, Lie Xu, HanFeng Zhao, Shunsi Zhang, Ying-Cong Chen

PDF: FlexGen: Flexible Multi-View Generation from Text and Image Inputs

Abstract

In this work, we introduce FlexGen, a flexible framework designed to generate controllable and consistent multi-view images, conditioned on a single-view image, a text prompt, or both. FlexGen tackles the challenges of controllable multi-view synthesis through additional conditioning on 3D-aware text annotations. We utilize the strong reasoning capabilities of GPT-4V to generate these annotations: by analyzing four orthogonal views of an object arranged as a tiled multi-view image, GPT-4V can produce text annotations that include 3D-aware information with spatial relationships. By integrating this control signal with the proposed adaptive dual-control module, our model can generate multi-view images that correspond to the specified text. FlexGen supports multiple controllable capabilities, allowing users to modify text prompts to generate reasonable and corresponding unseen parts. Additionally, users can influence attributes such as appearance and material properties, including metallic and roughness. Extensive experiments demonstrate that our approach offers enhanced controllability, marking a significant advancement over existing multi-view diffusion models. This work has substantial implications for fields requiring rapid and flexible 3D content creation, including game development, animation, and virtual reality. Project page: this https URL.
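The annotation step described in the abstract, tiling four orthogonal views into one image before querying GPT-4V, can be sketched as follows. This is only an illustrative reconstruction, not the authors' code: the function name, view ordering, tile size, and prompt wording are all assumptions, and the actual GPT-4V call is omitted.

```python
from PIL import Image

def tile_orthogonal_views(views, view_size=(256, 256)):
    """Arrange four orthogonal views of an object into a single
    2x2 tiled image suitable for a vision-language model query.
    (Hypothetical helper; the view order front/right/back/left
    is an assumption, not taken from the paper.)"""
    assert len(views) == 4, "expected exactly four orthogonal views"
    w, h = view_size
    canvas = Image.new("RGB", (2 * w, 2 * h))
    for i, view in enumerate(views):
        # Place views left-to-right, top-to-bottom in the 2x2 grid.
        canvas.paste(view.resize(view_size), ((i % 2) * w, (i // 2) * h))
    return canvas

# Illustrative prompt in the spirit of the paper's 3D-aware annotation step
# (wording is a guess, not quoted from the paper).
ANNOTATION_PROMPT = (
    "The image shows four orthogonal views of one object, tiled in a 2x2 "
    "grid. Describe the object with 3D-aware detail: overall shape, "
    "spatial relationships between parts, appearance, and material "
    "properties such as metallic and roughness."
)

# Demo with synthetic placeholder views; real renders would be used in practice.
views = [Image.new("RGB", (512, 512), c) for c in ("red", "green", "blue", "gray")]
tiled = tile_orthogonal_views(views)
print(tiled.size)  # (512, 512)
```

The tiled image and `ANNOTATION_PROMPT` would then be sent together to GPT-4V, and the returned text used as the conditioning signal for the multi-view diffusion model.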

Article link: https://paper.nweon.com/16252
