
An Introduction to the Speech Enhancement for Augmented Reality (Spear) Challenge

Note: We do not have the ability to review papers

PubDate: Sep 2022

Teams: Imperial College London, Meta

Writers: Pierre Guiraud, Sina Hafezi, Patrick A. Naylor, Alastair H. Moore, Jacob Donley, Vladimir Tourbabin, Thomas Lunner

PDF: An Introduction to the Speech Enhancement for Augmented Reality (Spear) Challenge

Abstract

It is well known that microphone arrays can be used to enhance a target speaker in a noisy, reverberant environment, with both spatial (e.g. beamforming) and statistical (e.g. source separation) methods proving effective. Head-worn microphone arrays inherently sample a sound field from an egocentric perspective — when the head moves, the apparent direction of even static sound sources changes with respect to the array. Traditionally, enhancement algorithms have aimed at being robust to head motion, but hearable devices and augmented reality (AR) headsets/glasses contain additional sensors which offer the potential to adapt to, or even exploit, head motion. The recently released EasyCom database contains microphone array recordings of group conversations made in a realistic restaurant-like acoustic scene. In addition to egocentric recordings made with AR glasses, extensive metadata, including the position and orientation of speakers, is provided. This paper describes the use and adaptation of EasyCom for a new IEEE SPS Data Challenge.
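To illustrate the spatial enhancement the abstract refers to, below is a minimal frequency-domain delay-and-sum beamformer sketch in NumPy. This is not the challenge's or the paper's algorithm, just the textbook technique: each channel is phase-shifted to undo the propagation delay for a chosen look direction so the target adds coherently. The function name, array geometry, and speed of sound (343 m/s) are illustrative assumptions.

```python
import numpy as np

def delay_and_sum(signals, mic_positions, look_dir, fs, c=343.0):
    """Minimal delay-and-sum beamformer (illustrative sketch, not from the paper).

    signals:       (n_mics, n_samples) time-domain channels
    mic_positions: (n_mics, 3) coordinates in metres, relative to array origin
    look_dir:      unit vector pointing from the array toward the target
    fs:            sample rate in Hz; c: assumed speed of sound in m/s
    """
    n_mics, n_samples = signals.shape
    # A plane wave from look_dir reaches mic m earlier than the origin
    # by tau_m = (r_m . d) / c seconds (may be negative).
    delays = mic_positions @ look_dir / c
    spectra = np.fft.rfft(signals, axis=1)
    freqs = np.fft.rfftfreq(n_samples, 1.0 / fs)
    # Undo each channel's advance with a phase shift, then average channels:
    # coherent summation for the look direction, partial cancellation elsewhere.
    phase = np.exp(-2j * np.pi * freqs[None, :] * delays[:, None])
    return np.fft.irfft((spectra * phase).mean(axis=0), n=n_samples)
```

With a static look direction this breaks down exactly in the scenario the paper targets: head motion changes `look_dir` over time, which is why the challenge provides head-orientation metadata that a tracking beamformer could exploit.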
