3D Vision-Language Gaussian Splatting

Editor: 广东客   |   Category: CV   |   March 6, 2025

Note: We do not have the ability to review papers.

PubDate: Oct 2024

Teams: University of Central Florida; United Imaging Intelligence, Boston MA

Writers: Qucheng Peng, Benjamin Planche, Zhongpai Gao, Meng Zheng, Anwesa Choudhuri, Terrence Chen, Chen Chen, Ziyan Wu

PDF: 3D Vision-Language Gaussian Splatting

Abstract

Recent advancements in 3D reconstruction methods and vision-language models have propelled the development of multi-modal 3D scene understanding, which has vital applications in robotics, autonomous driving, and virtual/augmented reality. However, current multi-modal scene understanding approaches have naively embedded semantic representations into 3D reconstruction methods without striking a balance between visual and language modalities, which leads to unsatisfying semantic rasterization of translucent or reflective objects, as well as over-fitting on color modality. To alleviate these limitations, we propose a solution that adequately handles the distinct visual and semantic modalities, i.e., a 3D vision-language Gaussian splatting model for scene understanding, to put emphasis on the representation learning of language modality. We propose a novel cross-modal rasterizer, using modality fusion along with a smoothed semantic indicator for enhancing semantic rasterization. We also employ a camera-view blending technique to improve semantic consistency between existing and synthesized views, thereby effectively mitigating over-fitting. Extensive experiments demonstrate that our method achieves state-of-the-art performance in open-vocabulary semantic segmentation, surpassing existing methods by a significant margin.
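The two technical ingredients named in the abstract, a smoothed semantic indicator inside the cross-modal rasterizer and camera-view blending between existing and synthesized views, can be pictured with a minimal sketch. The snippet below is not the authors' implementation: all function names, tensor shapes, the sigmoid gate, and the linear pose blending are assumptions made purely for illustration of the general idea.

```python
# Hypothetical sketch of (1) compositing per-Gaussian language features with a
# soft "semantic indicator" decoupled from color opacity, and (2) blending two
# training camera centers to obtain a pseudo novel view for a consistency loss.
import torch

def composite_semantics(alpha, sem_logit, features):
    """Front-to-back alpha compositing of language features for one pixel.

    alpha:     (N,) color opacities of the depth-sorted Gaussians along the ray
    sem_logit: (N,) per-Gaussian semantic-indicator logits
    features:  (N, D) per-Gaussian language embeddings
    returns:   (D,) rasterized language feature for the pixel
    """
    # Smoothed semantic indicator: a soft gate in (0, 1) rather than a hard
    # mask, so translucent or reflective surfaces can still carry semantics.
    sem_gate = torch.sigmoid(sem_logit)
    w = alpha * sem_gate                               # per-Gaussian semantic weight
    transmittance = torch.cumprod(
        torch.cat([torch.ones(1), 1.0 - w[:-1]]), dim=0
    )                                                  # light surviving to each Gaussian
    contrib = (w * transmittance).unsqueeze(-1)        # (N, 1)
    return (contrib * features).sum(dim=0)

def blend_camera_centers(c_a, c_b, t=0.5):
    """Linearly blend two training camera centers into a pseudo novel view.

    A full implementation would also interpolate rotations (e.g. quaternion
    slerp); this sketch only blends positions.
    """
    return (1.0 - t) * c_a + t * c_b

# Toy usage: 8 Gaussians along a ray, 512-dim language features.
alpha = torch.rand(8)
sem_logit = torch.randn(8)
feats = torch.randn(8, 512)
pixel_feature = composite_semantics(alpha, sem_logit, feats)
print(pixel_feature.shape)  # torch.Size([512])
```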

Article link: https://paper.nweon.com/16233

Tags: Intel

