
Visual-Assisted Sound Source Depth Estimation in the Wild

Note: We don't have the ability to review papers

PubDate: Jul 2022

Teams: The University of Texas at Austin

Writers: Wei Sun, Lili Qiu

PDF: Visual-Assisted Sound Source Depth Estimation in the Wild

Abstract

Depth estimation enables a wide variety of 3D applications, such as robotics, autonomous driving, and virtual reality. Despite significant work in this area, it remains open how to enable accurate, low-cost, high-resolution, and large-range depth estimation. Inspired by the flash-to-bang phenomenon (i.e., hearing the thunder after seeing the lightning), this paper develops FBDepth, the first audio-visual depth estimation framework. It takes the difference between the time-of-flight (ToF) of the light and the sound to infer the sound source depth. FBDepth is the first to incorporate video and audio with both semantic features and spatial hints for range estimation. It first aligns correspondence between the video track and audio track to locate the target object and target sound at a coarse granularity. Based on the observation of moving objects' trajectories, FBDepth proposes to estimate the intersection of optical flow before and after the sound production to locate video events in time. FBDepth feeds the estimated timestamp of the video event and the audio clip for the final depth estimation. We use a mobile phone to collect 3000+ video clips with 20 different objects at up to 50 m. FBDepth decreases the Absolute Relative error (AbsRel) by 55% compared to RGB-based methods.
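The flash-to-bang principle behind FBDepth can be sketched with basic physics: light from an event arrives almost instantly, while sound travels at roughly 343 m/s, so the observed delay between the visual event and its sound encodes the source depth. The snippet below is a minimal illustration of that geometry only, not the paper's actual pipeline (which estimates the event timestamp from optical flow and audio); the function name and the assumed speed of sound are our own choices.

```python
SPEED_OF_SOUND = 343.0        # m/s in air at ~20 °C (assumption)
SPEED_OF_LIGHT = 299_792_458  # m/s

def depth_from_delay(delta_t: float) -> float:
    """Depth of a sound source from the delay between seeing and hearing it.

    Solves delta_t = d / c_sound - d / c_light for the distance d.
    Because light is ~6 orders of magnitude faster, this is nearly
    d = delta_t * c_sound.
    """
    return delta_t / (1.0 / SPEED_OF_SOUND - 1.0 / SPEED_OF_LIGHT)

# A 100 ms audio-visual delay corresponds to a source ~34.3 m away.
print(round(depth_from_delay(0.1), 1))
```

At the paper's maximum range of 50 m the delay is only about 146 ms, which is why FBDepth needs precise temporal localization of the video event rather than naive frame-level alignment.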
