
Spatial Sound Scene Synthesis and Manipulation for Virtual Reality and Audio Effects

Note: We do not have the ability to review papers.

PubDate: June 2018

Teams: Ville Pulkki; Symeon Delikaris-Manias; Archontis Politis

Writers: Ville Pulkki; Symeon Delikaris-Manias; Archontis Politis

PDF: Spatial Sound Scene Synthesis and Manipulation for Virtual Reality and Audio Effects

Abstract

This chapter covers DirAC-based parametric time-frequency domain (TF-domain) audio techniques in which the virtual sound scene is synthesized from a geometric description of the scene around the avatar. Time-frequency domain parametric spatial audio techniques also have applications in audio engineering, in addition to virtual reality. The basic processing block of DirAC-based spatialization in virtual worlds is the DirAC monosynth. When producing audio for virtual reality, recorded speech, music, and/or environmental sounds are often used as signals for virtual sources. The chapter presents the application of parametric analysis and synthesis to spatial audio effects using the DirAC framework. The ambience extraction method attempts to make foreground sounds with a clear direction softer while preserving the background sounds, or ambience. Spatialization of a monophonic audio channel, either to a multichannel reproduction setup or as a background sound in virtual reality, is a useful task in many audio engineering applications.
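The chapter itself is not reproduced here, but the sketch below illustrates the general kind of processing the abstract refers to: DirAC-style parametric analysis estimates a direction of arrival and a diffuseness value for every time-frequency bin, and an ambience-extraction effect can then weight the signal by diffuseness so that strongly directional foreground sounds are attenuated while diffuse background sounds are kept. This is a minimal sketch assuming SN3D/ambiX-style first-order B-format input (W, X, Y, Z); the function names, smoothing constant, and STFT settings are illustrative choices and are not taken from the chapter.

```python
# Illustrative DirAC-style analysis and diffuseness-based ambience extraction.
# Assumes SN3D/ambiX-style first-order B-format channels; constants are omitted
# or folded into the normalization, so values are only meaningful up to scale.
import numpy as np
from scipy.signal import stft, istft

def dirac_analysis(w, x, y, z, fs, nperseg=1024, alpha=0.8):
    """Estimate per-bin direction of arrival and diffuseness (hypothetical helper)."""
    # Short-time Fourier transforms of the four B-format channels.
    _, _, W = stft(w, fs, nperseg=nperseg)
    _, _, X = stft(x, fs, nperseg=nperseg)
    _, _, Y = stft(y, fs, nperseg=nperseg)
    _, _, Z = stft(z, fs, nperseg=nperseg)

    V = np.stack([X, Y, Z])                       # velocity-related channels
    # Active intensity vector (up to constant factors): Re{ conj(W) * [X, Y, Z] }.
    I = np.real(np.conj(W)[None, ...] * V)
    # Energy density (up to constant factors), assuming SN3D-style scaling.
    E = 0.5 * (np.abs(W) ** 2 + np.sum(np.abs(V) ** 2, axis=0))

    # Recursive temporal smoothing of intensity and energy per frequency bin.
    I_avg = np.zeros_like(I)
    E_avg = np.zeros_like(E)
    for t in range(I.shape[-1]):
        prev_I = I_avg[..., t - 1] if t > 0 else 0.0
        prev_E = E_avg[..., t - 1] if t > 0 else 0.0
        I_avg[..., t] = alpha * prev_I + (1 - alpha) * I[..., t]
        E_avg[..., t] = alpha * prev_E + (1 - alpha) * E[..., t]

    # The direction of arrival points opposite to the intensity flow.
    doa = -I_avg / (np.linalg.norm(I_avg, axis=0, keepdims=True) + 1e-12)
    azimuth = np.arctan2(doa[1], doa[0])
    elevation = np.arcsin(np.clip(doa[2], -1.0, 1.0))

    # Diffuseness in [0, 1]: near 0 for a single plane wave, near 1 for a diffuse field.
    psi = 1.0 - np.linalg.norm(I_avg, axis=0) / (E_avg + 1e-12)
    psi = np.clip(psi, 0.0, 1.0)
    return azimuth, elevation, psi, W

def extract_ambience(w, x, y, z, fs, nperseg=1024):
    """Attenuate directional foreground sounds, keep diffuse background sounds."""
    _, _, psi, W = dirac_analysis(w, x, y, z, fs, nperseg=nperseg)
    # Weight each time-frequency bin of the omni channel by sqrt(diffuseness),
    # so bins dominated by a clear direction are made softer.
    _, ambience = istft(np.sqrt(psi) * W, fs, nperseg=nperseg)
    return ambience
```

In an actual DirAC synthesis chain the same parameters would also drive the non-diffuse stream (panned with a gain proportional to sqrt(1 - psi)) and the decorrelated diffuse stream, but the weighting above is enough to show how the ambience-extraction effect mentioned in the abstract can be expressed per time-frequency bin.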
