Localization of Virtual Sounds in Dynamic Listening Using Sparse HRTFs

PubDate: August 13, 2020

Teams: Facebook Reality Labs

Authors: Zamir Ben-Hur, David Lou Alon, Philip W. Robinson, Ravish Mehra

PDF: Localization of Virtual Sounds in Dynamic Listening Using Sparse HRTFs

Abstract

Reproducing virtual sound sources that are perceptually indistinguishable from real-world sounds requires an accurate representation of the virtual source's location. A key component in such a reproduction system is the Head-Related Transfer Function (HRTF), which differs for every individual. In this study, we introduce an experimental setup for accurately evaluating localization performance with a spatial sound reproduction system under dynamic listening conditions. The setup makes it possible to compare the evaluation results with real-world localization performance, and it facilitates testing of different virtual reproduction conditions, such as different HRTFs or different HRTF representations and interpolation methods. Localization experiments are conducted comparing real-world sound sources with virtual sound sources rendered using high-resolution individual HRTFs, sparse individual HRTFs, and a generic HRTF.
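
The abstract mentions testing different HRTF representations and interpolation methods for sparse measurement sets. One common approach is spherical-harmonic (SH) interpolation: fit a low-order SH model to HRIRs measured at a sparse set of directions, then evaluate the model at any target direction. Below is a minimal Python sketch of that idea; it is not the authors' implementation, and the SH order, measurement grid, and synthetic HRIRs are illustrative assumptions.

```python
# Minimal sketch of sparse-HRTF interpolation via spherical harmonics.
# The synthetic "measured" HRIRs are placeholders, not real data.
import numpy as np
from scipy.special import sph_harm

def sh_matrix(order, azimuth, colatitude):
    """Real-valued SH basis matrix, shape (num_dirs, (order+1)**2).
    Any consistent real-SH convention works here, since the same
    basis is used for both fitting and evaluation."""
    cols = []
    for n in range(order + 1):
        for m in range(-n, n + 1):
            y = sph_harm(abs(m), n, azimuth, colatitude)  # complex SH
            if m < 0:
                cols.append(np.sqrt(2) * y.imag)
            elif m == 0:
                cols.append(y.real)
            else:
                cols.append(np.sqrt(2) * y.real)
    return np.stack(cols, axis=-1)

rng = np.random.default_rng(0)
order = 3                        # low SH order, limited by measurement count
num_meas, ir_len = 36, 256       # sparse grid: 36 directions, 256-tap HRIRs
az = rng.uniform(0, 2 * np.pi, num_meas)        # measurement azimuths (rad)
col = np.arccos(rng.uniform(-1, 1, num_meas))   # measurement colatitudes (rad)
hrirs = rng.standard_normal((num_meas, ir_len)) # placeholder measured HRIRs

# Least-squares SH fit per HRIR tap: coeffs has shape ((order+1)**2, ir_len).
Y = sh_matrix(order, az, col)
coeffs, *_ = np.linalg.lstsq(Y, hrirs, rcond=None)

# Evaluate the fitted model at an arbitrary target direction.
target_az, target_col = np.array([0.5]), np.array([1.2])
hrir_interp = sh_matrix(order, target_az, target_col) @ coeffs
print(hrir_interp.shape)  # (1, 256): interpolated HRIR at the new direction
```

In a dynamic (head-tracked) renderer, this evaluation step would be repeated each time the source direction relative to the head changes, so a smooth, low-order SH model is attractive precisely because sparse grids cannot support nearest-neighbor lookup without audible switching artifacts.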
