CUBE360: Learning Cubic Field Representation for Monocular 360 Depth Estimation for Virtual Reality

Editor: 广东客   |   Category: CV   |   March 6, 2025

Note: We don't have the ability to review papers.

PubDate: Oct 2024

Teams: USTC; HKUST(GZ)

Writers: Wenjie Chang, Hao Ai, Tianzhu Zhang, Lin Wang

PDF: CUBE360: Learning Cubic Field Representation for Monocular 360 Depth Estimation for Virtual Reality

Abstract

Panoramic images provide comprehensive scene information and are suitable for VR applications. Obtaining corresponding depth maps is essential for achieving immersive and interactive experiences. However, panoramic depth estimation presents significant challenges due to the severe distortion caused by equirectangular projection (ERP) and the limited availability of panoramic RGB-D datasets. Inspired by the recent success of neural rendering, we propose a novel method, named CUBE360, that learns a cubic field composed of multiple multiplane images (MPIs) from a single panoramic image for continuous depth estimation at any view direction. Our CUBE360 employs cubemap projection to transform an ERP image into six faces and extract the MPIs for each, thereby reducing the memory consumption required for MPI processing of high-resolution data. Additionally, this approach avoids the computational complexity of handling the uneven pixel distribution inherent to equirectangular projection. An attention-based blending module is then employed to learn correlations among the MPIs of cubic faces, constructing a cubic field representation with color and density information at various depth levels. Furthermore, a novel sampling strategy is introduced for rendering novel views from the cubic field at both cubic and planar scales. The entire pipeline is trained using photometric loss calculated from rendered views within a self-supervised learning (SSL) approach, enabling training on 360 videos without depth annotations. Experiments on both synthetic and real-world datasets demonstrate the superior performance of CUBE360 compared to prior SSL methods. We also highlight its effectiveness in downstream applications, such as VR roaming and visual effects, underscoring CUBE360's potential to enhance immersive experiences.
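The abstract's first stage is projecting the ERP panorama onto six cubemap faces before extracting per-face MPIs. The paper's implementation is not shown here; the following is a minimal PyTorch sketch of that cubemap-projection step only. The function name erp_to_cubemap, the face-orientation convention, and the use of grid_sample are illustrative assumptions, not the authors' code.

```python
# Hypothetical sketch (not the authors' code): resampling an equirectangular
# panorama into six cubemap faces, the projection step described in the
# abstract before per-face MPI extraction. Face orientations are illustrative.
import torch
import torch.nn.functional as F

def erp_to_cubemap(erp: torch.Tensor, face_size: int = 256) -> torch.Tensor:
    """erp: (B, C, H, W) ERP image -> (B, 6, C, face_size, face_size) cube faces."""
    B, C, H, W = erp.shape
    device = erp.device
    # Pixel grid in [-1, 1] on each cube face.
    u = torch.linspace(-1.0, 1.0, face_size, device=device)
    v = torch.linspace(-1.0, 1.0, face_size, device=device)
    vv, uu = torch.meshgrid(v, u, indexing="ij")
    ones = torch.ones_like(uu)
    # View directions for the six faces (+x, -x, +y, -y, +z, -z); the exact
    # orientation convention is an assumption for this sketch.
    dirs = [
        torch.stack([ ones,  -vv,  -uu], dim=-1),
        torch.stack([-ones,  -vv,   uu], dim=-1),
        torch.stack([  uu,  ones,   vv], dim=-1),
        torch.stack([  uu, -ones,  -vv], dim=-1),
        torch.stack([  uu,  -vv,  ones], dim=-1),
        torch.stack([ -uu,  -vv, -ones], dim=-1),
    ]
    faces = []
    for d in dirs:
        d = d / d.norm(dim=-1, keepdim=True)
        lon = torch.atan2(d[..., 0], d[..., 2])        # longitude in [-pi, pi]
        lat = torch.asin(d[..., 1].clamp(-1.0, 1.0))   # latitude in [-pi/2, pi/2]
        # Map spherical coordinates to normalized ERP coordinates for grid_sample.
        grid = torch.stack([lon / torch.pi, 2.0 * lat / torch.pi], dim=-1)
        grid = grid.expand(B, -1, -1, -1)              # (B, face_size, face_size, 2)
        faces.append(F.grid_sample(erp, grid, align_corners=True))
    return torch.stack(faces, dim=1)
```

Each face can then be fed to a per-face MPI predictor, with the attention-based blending and photometric self-supervision described in the abstract applied downstream; those stages are not sketched here.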

Article link: https://paper.nweon.com/16230

