
Audio-Visual Perception of Omnidirectional Video for Virtual Reality Applications

Note: We don't have the ability to review papers.

PubDate: June 2020

Teams: Trinity College Dublin; INSA Rennes

Writers: Fang-Yi Chao; Cagri Ozcinar; Chen Wang; Emin Zerman; Lu Zhang; Wassim Hamidouche; Olivier Deforges; Aljosa Smolic

PDF: Audio-Visual Perception of Omnidirectional Video for Virtual Reality Applications

Abstract

Ambisonics, which constructs a sound distribution over the full viewing sphere, improves the immersive experience in omnidirectional video (ODV) by enabling observers to perceive sound directions. Thus, human attention can be guided by audio and visual stimuli simultaneously. Numerous datasets have been proposed to investigate human visual attention by collecting eye fixations of observers navigating ODV with head-mounted displays (HMDs). However, no such dataset analyzes the impact of audio information. In this paper, we establish a new audio-visual attention dataset for ODV with mute, mono, and ambisonics audio. User behavior, including visual attention corresponding to sound source locations, viewing navigation congruence between observers, and fixation distributions across these three audio modalities, is studied based on video and audio content. From our statistical analysis, we preliminarily found that, compared to perceiving only visual cues, perceiving visual cues together with a salient object sound (e.g., a human voice or an ambulance siren) draws more visual attention to the objects making the sound and guides viewing behavior when such objects are not in the current field of view. The more in-depth interactive effects between audio and visual cues in mute, mono, and ambisonics still require further comprehensive study. The dataset and testbed developed in this initial work will be made publicly available with the paper to foster future research on audio-visual attention for ODV.
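
As an illustration of how fixation distributions under different audio modalities might be compared, below is a minimal Python sketch (not taken from the paper) that computes the KL divergence between two fixation maps defined over an equirectangular grid. The array shapes, variable names, and placeholder data are assumptions for demonstration only.

```python
import numpy as np

def fixation_map_kl(p, q, eps=1e-12):
    """KL divergence between two fixation maps given as 2D arrays
    over an equirectangular grid; both are normalized to sum to 1."""
    p = p / (p.sum() + eps)
    q = q / (q.sum() + eps)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# Hypothetical example: compare fixation maps collected under the
# mute and ambisonics conditions for the same ODV frame.
rng = np.random.default_rng(0)
map_mute = rng.random((64, 128))        # placeholder fixation density, mute condition
map_ambisonics = rng.random((64, 128))  # placeholder fixation density, ambisonics condition

print("KL(mute || ambisonics):", fixation_map_kl(map_mute, map_ambisonics))
```

A larger divergence for frames containing salient sound sources would be consistent with the paper's observation that object sounds redirect visual attention; the actual analysis in the paper may use different metrics.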
