CUBE360: Learning Cubic Field Representation for Monocular 360 Depth Estimation for Virtual Reality

Editor: 广东客   |   Category: CV   |   March 6, 2025

Note: We do not have the ability to review papers.

PubDate: Oct 2024

Teams: USTC; HKUST(GZ)

Writers: Wenjie Chang, Hao Ai, Tianzhu Zhang, Lin Wang

PDF: CUBE360: Learning Cubic Field Representation for Monocular 360 Depth Estimation for Virtual Reality

Abstract

Panoramic images provide comprehensive scene information and are suitable for VR applications. Obtaining corresponding depth maps is essential for achieving immersive and interactive experiences. However, panoramic depth estimation presents significant challenges due to the severe distortion caused by equirectangular projection (ERP) and the limited availability of panoramic RGB-D datasets. Inspired by the recent success of neural rendering, we propose a novel method, named CUBE360, that learns a cubic field composed of multiple MPIs from a single panoramic image for continuous depth estimation at any view direction. Our CUBE360 employs cubemap projection to transform an ERP image into six faces and extract the MPIs for each, thereby reducing the memory consumption required for MPI processing of high-resolution data. Additionally, this approach avoids the computational complexity of handling the uneven pixel distribution inherent to equirectangular projection. An attention-based blending module is then employed to learn correlations among the MPIs of cubic faces, constructing a cubic field representation with color and density information at various depth levels. Furthermore, a novel sampling strategy is introduced for rendering novel views from the cubic field at both cubic and planar scales. The entire pipeline is trained using photometric loss calculated from rendered views within a self-supervised learning (SSL) approach, enabling training on 360 videos without depth annotations. Experiments on both synthetic and real-world datasets demonstrate the superior performance of CUBE360 compared to prior SSL methods. We also highlight its effectiveness in downstream applications, such as VR roaming and visual effects, underscoring CUBE360's potential to enhance immersive experiences.
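To make the projection step concrete, below is a minimal NumPy sketch (not the authors' code) of resampling an equirectangular (ERP) panorama into six cubemap faces, the transformation the abstract describes before per-face MPI extraction. The face names, axis conventions, and nearest-neighbour lookup are assumptions made for brevity.

```python
# Minimal ERP-to-cubemap resampling sketch (assumed conventions, not the paper's code).
import numpy as np

def erp_to_cubemap(erp: np.ndarray, face_size: int) -> dict:
    """Resample an equirectangular (H x W x 3) image into six cube faces."""
    H, W = erp.shape[:2]
    # Pixel-center grid in (-1, 1) for one face.
    t = (np.arange(face_size) + 0.5) / face_size * 2.0 - 1.0
    u, v = np.meshgrid(t, t)  # u: right, v: down

    # Viewing direction of each face pixel (x right, y down, z forward) -- assumed layout.
    one = np.ones_like(u)
    dirs = {
        "front": np.stack([ u,  v,  one], -1),
        "back":  np.stack([-u,  v, -one], -1),
        "right": np.stack([ one,  v, -u], -1),
        "left":  np.stack([-one,  v,  u], -1),
        "up":    np.stack([ u, -one,  v], -1),
        "down":  np.stack([ u,  one, -v], -1),
    }

    faces = {}
    for name, d in dirs.items():
        d = d / np.linalg.norm(d, axis=-1, keepdims=True)
        lon = np.arctan2(d[..., 0], d[..., 2])          # longitude in [-pi, pi]
        lat = np.arcsin(np.clip(d[..., 1], -1.0, 1.0))  # latitude in [-pi/2, pi/2]
        # Map spherical angles to ERP pixel coordinates (nearest neighbour).
        x = ((lon / (2 * np.pi) + 0.5) * W).astype(int) % W
        y = np.clip(((lat / np.pi + 0.5) * H).astype(int), 0, H - 1)
        faces[name] = erp[y, x]
    return faces
```

The cubic field itself stores color and density at several depth levels per face, much like a multiplane image (MPI). Rendering a view from one face of such a representation roughly amounts to standard front-to-back over-compositing of the planes, sketched below; the plane ordering and alpha parameterisation are assumptions rather than details from the paper.

```python
# Generic MPI over-compositing sketch (assumed front-to-back plane order).
import numpy as np

def composite_mpi(colors: np.ndarray, alphas: np.ndarray) -> np.ndarray:
    """Composite D planes ordered front (index 0) to back.

    colors: (D, H, W, 3) per-plane RGB
    alphas: (D, H, W, 1) per-plane opacity in [0, 1]
    """
    out = np.zeros(colors.shape[1:], dtype=colors.dtype)
    transmittance = np.ones(alphas.shape[1:], dtype=colors.dtype)
    for c, a in zip(colors, alphas):
        out += transmittance * a * c   # add light from this plane
        transmittance *= (1.0 - a)     # attenuate what passes to planes behind
    return out
```

In the self-supervised setting the abstract describes, a view rendered this way would be compared against a real video frame with a photometric loss, so no depth annotations are required.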

Article link: https://paper.nweon.com/16230
