
The Prediction of Saliency Map for Head and Eye Movements in 360 Degree Images

Note: We do not have the ability to review papers.

PubDate: December 2019

Teams: Shanghai Jiao Tong University; University of Macau

Writers: Yucheng Zhu; Guangtao Zhai; Xiongkuo Min; Jiantao Zhou

PDF: The Prediction of Saliency Map for Head and Eye Movements in 360 Degree Images

Abstract

By recording the whole scene around the capturer, virtual reality (VR) techniques can provide viewers with a sense of presence. To deliver a satisfactory quality of experience, there should be at least 60 pixels per degree, so the resolution of panoramas should reach 21600 × 10800. This huge amount of data places great demands on data processing and transmission. However, when exploring a virtual environment, viewers only perceive the content in the current field of view (FOV). Therefore, if we can predict head and eye movements, which are important viewer behaviors, more processing resources can be allocated to the active FOV. But conventional saliency prediction methods are not fully adequate for panoramic images. In this paper, a new panorama-oriented model is proposed to predict head and eye movements. Owing to the superiority of computation in the spherical domain, spherical harmonics are employed to extract features at different frequency bands and orientations. Related low- and high-level features, including the rare components in the frequency and color domains, the difference between central and peripheral vision, visual equilibrium, person and car detection, and equator bias, are extracted to estimate saliency. To predict head movements, visual mechanisms including visual uncertainty and equilibrium are incorporated, and the graphical model and functional representation for the switch of head orientation are established. Extensive experimental results on a publicly available database demonstrate the effectiveness of our methods.
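The 60-pixels-per-degree requirement maps directly onto the 21600 × 10800 resolution quoted above, and the equator bias the authors mention can be pictured as a latitude-weighted prior over the equirectangular grid. The sketch below is illustrative only, not code from the paper; the `equator_bias` function and its `sigma_deg` parameter are assumptions made for demonstration.

```python
import numpy as np

# 60 pixels per degree over a 360° x 180° sphere gives the panorama
# resolution stated in the abstract.
ppd = 60
width, height = 360 * ppd, 180 * ppd
print(width, height)  # 21600 10800

def equator_bias(h, w, sigma_deg=25.0):
    """Hypothetical equator-bias prior for an equirectangular saliency map:
    each row is weighted by a Gaussian over latitude, peaking at the equator.
    sigma_deg is an assumed spread, not a value from the paper."""
    lat = np.linspace(90.0, -90.0, h)                # latitude of each row, degrees
    weights = np.exp(-lat**2 / (2 * sigma_deg**2))   # 1.0 at the equator, decaying toward the poles
    return np.tile(weights[:, None], (1, w))         # (h, w) prior map

# Downsampled grid for illustration; the full-resolution map would be 10800 x 21600.
prior = equator_bias(height // 100, width // 100)
```

Such a prior would typically be combined multiplicatively or additively with the feature-based saliency map; the abstract lists it as one of several low- and high-level cues rather than specifying how it is fused.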
