
Audio Signal Processing for Telepresence Based on Wearable Array in Noisy and Dynamic Scenes

Note: We do not have the ability to review papers.

PubDate: Feb 16 2022

Teams: Ben-Gurion University of the Negev; Reality Labs Research

Writers: Hanan Beit-On, Moti Lugasi, Lior Madmoni, Anjali Menon, Anurag Kumar, Jacob Donley, Vladimir Tourbabin, Boaz Rafaely

PDF: Audio Signal Processing for Telepresence Based on Wearable Array in Noisy and Dynamic Scenes

Abstract

Telepresence for virtual meetings has gained interest due to recent travel limitations and the new reality of working from home. However, the current literature supporting real-world microphone arrays for realistic audio telepresence is very limited. This paper investigates a scenario in which a distant participant virtually joins a meeting between two dynamic participants. The audio signal processing chain (i) starts by recording with an array mounted on glasses, (ii) continues with initial processing that provides direction-of-arrival estimation of a desired speaker using a direct-path dominance test robust to reverberation, combined with speaker separation for improved dynamic localization, (iii) follows with speech enhancement against interfering speakers and noise, (iv) and ends with binaural signal matching for headphone listening. The paper compares model-based processing to learning-based processing in both noisy and dynamic scenarios, and presents novel processing using data from a real wearable array, studied by simulation and a listening test.
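To make step (ii) concrete, the direct-path dominance (DPD) test flags time-frequency bins whose local spatial correlation matrix is dominated by a single eigenvalue, i.e. bins carrying mostly the direct sound rather than reverberation; the DOA can then be read off the principal eigenvector. The following is only a minimal illustrative sketch on synthetic narrowband data, not the authors' implementation: the 4-microphone linear array geometry, the eigenvalue-ratio threshold of 10, and the grid-search DOA estimator are all assumptions for illustration.

```python
import numpy as np

def steering_vector(freq, mic_pos, theta, c=343.0):
    """Far-field steering vector for a linear array along the x-axis."""
    delays = mic_pos * np.cos(theta) / c
    return np.exp(-2j * np.pi * freq * delays)

def dpd_ratio(R):
    """Largest-to-second eigenvalue ratio; large values indicate a
    direct-path-dominated (effectively rank-1) correlation matrix."""
    w = np.linalg.eigvalsh(R)  # ascending order
    return w[-1] / max(w[-2], 1e-12)

def estimate_doa(R, freq, mic_pos, grid):
    """Pick the grid angle whose steering vector best matches the
    principal eigenvector of the spatial correlation matrix."""
    _, V = np.linalg.eigh(R)
    u = V[:, -1]
    scores = [abs(np.vdot(steering_vector(freq, mic_pos, th), u)) for th in grid]
    return grid[int(np.argmax(scores))]

# Synthetic single-source time-frequency bins (hypothetical array: 4 mics, 3 cm apart)
rng = np.random.default_rng(0)
mic_pos = np.arange(4) * 0.03
freq = 2000.0
theta_true = np.deg2rad(60.0)
a = steering_vector(freq, mic_pos, theta_true)

# Average outer products over a local neighborhood of frames (light sensor noise)
R = np.zeros((4, 4), dtype=complex)
for _ in range(20):
    s = rng.normal() + 1j * rng.normal()          # source coefficient
    n = 0.01 * (rng.normal(size=4) + 1j * rng.normal(size=4))
    x = s * a + n
    R += np.outer(x, x.conj())
R /= 20

grid = np.deg2rad(np.arange(0, 181, 1.0))
ratio = dpd_ratio(R)          # high ratio -> bin passes the DPD test
theta_hat = estimate_doa(R, freq, mic_pos, grid)
```

In a full pipeline, only bins passing the DPD test would contribute DOA votes, and the resulting track would steer the enhancement and binaural rendering stages described in (iii) and (iv).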
