A comparison of object-based and scene-based compression in virtual reality
PubDate: Oct 2022
Teams: Reality Labs; Data Science and Statistics, Toronto; McMaster University
Writers: Joanna Luberadzka, Christi Miller, Andy Muehlhausen, Jeff Crukley, Melinda Anderson, Shaikat Hossain, Thomas Lunner
PDF: A comparison of object-based and scene-based compression in virtual reality
Abstract
Despite technological advances, the main function of hearing aids remains sound amplification. To avoid overamplification, hearing aids use wide dynamic range compression (WDRC), which adjusts the amount of gain depending on the input sound level. As a side effect of these non-linear modifications, WDRC introduces undesired distortions, especially when applied to sound mixtures. In this study, we introduce an alternative approach in which individual sound objects are separated prior to compression. Although the potential benefit of such processing has been discussed previously, perceptual evidence has not been investigated to date. We created a virtual reality (VR)-based listening experiment in which conventional, scene-based WDRC is compared with the proposed object-based WDRC via measures of speech intelligibility, listening effort, comfort, and preference. The acoustic scenes were designed to capture the known detrimental effects of conventional WDRC: level fluctuations of the target signal when it is interrupted by a competing masker; lack of adequate target gain when the target signal is buried in noise; and decreased long-term signal-to-noise ratio (SNR) due to amplification of low-level background noise. Preference ratings and objective intelligibility scores indicated a benefit of object-based processing and motivate further development of object-based approaches.
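The first detrimental effect above can be illustrated with a minimal numerical sketch. The snippet below is not the paper's implementation; it assumes a generic static WDRC rule (illustrative kneepoint, compression ratio, and linear gain chosen for the example) and frame-level signal levels in dB. It shows why a scene-based compressor, which derives one gain from the mixture level, lets a loud intermittent masker drag down the gain applied to a steady target, while an object-based compressor computing gain from the separated target alone keeps the target's amplification constant.

```python
import numpy as np

def wdrc_gain_db(level_db, kneepoint_db=50.0, ratio=3.0, linear_gain_db=20.0):
    """Static WDRC input/output rule (illustrative parameters, not from the paper).

    Below the kneepoint the gain is linear; above it, output level grows by
    only 1/ratio dB per dB of input, so gain shrinks as the input gets louder.
    """
    excess = np.maximum(np.asarray(level_db, dtype=float) - kneepoint_db, 0.0)
    return linear_gain_db - excess * (1.0 - 1.0 / ratio)

# Frame levels (dB) for a steady 60 dB target and an intermittent 75 dB masker
# (-inf dB marks frames where the masker is silent).
target_db = np.full(8, 60.0)
masker_db = np.array([-np.inf, -np.inf, 75.0, 75.0,
                      -np.inf, -np.inf, 75.0, 75.0])

# Level of the mixture, summing powers of the two sources.
mix_db = 10.0 * np.log10(10.0 ** (target_db / 10.0) + 10.0 ** (masker_db / 10.0))

gain_scene = wdrc_gain_db(mix_db)       # scene-based: one gain from the mixture
gain_object = wdrc_gain_db(target_db)   # object-based: target gets its own gain

# Target level after amplification under each scheme.
target_out_scene = target_db + gain_scene
target_out_object = target_db + gain_object
```

Under the scene-based rule the target's output level drops by roughly 10 dB whenever the masker bursts in, because the mixture level (and hence the compressed gain) is dominated by the masker; the object-based rule leaves the target's output level constant across frames.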