
Panonut360: A Head and Eye Tracking Dataset for Panoramic Video


Date: April 2024

Team: Huazhong University of Science and Technology, Wuhan

Authors: Yutong Xu, Junhao Du, Jiahe Wang, Yuwei Ning, Sihan Zhou, Yang Cao

PDF: Panonut360: A Head and Eye Tracking Dataset for Panoramic Video

Abstract

With the rapid development and widespread adoption of VR/AR technology, delivering immersive panoramic video services that match users' personal preferences and habits has become a long-standing challenge. Understanding the saliency regions where users focus, based on data collected with HMDs (Head-Mounted Displays), can improve multimedia encoding, transmission, and quality assessment. At the same time, large-scale datasets are essential for researchers and developers to explore short- and long-term user behavior patterns and to train AI models for panoramic video. However, existing panoramic video datasets often provide only low-frequency head or eye movement data collected over short videos, which is insufficient for analyzing users' Field of View (FoV) and generating video saliency regions.

Driven by these practical needs, in this paper we present a head and eye tracking dataset covering 50 users (25 male and 25 female) watching 15 panoramic videos (mostly in 4K). The dataset details each user's viewport and gaze attention locations. In addition, we present statistical findings extracted from the dataset. For example, the deviation between head and eye movements challenges the widely held assumption that gaze attention falls off from the center of the FoV following a Gaussian distribution: our analysis reveals a consistent downward offset of gaze fixations relative to the FoV across multiple users and videos. This motivates the dataset's name, Panonut, after the resulting donut-shaped saliency weighting. Finally, we provide a script that generates saliency distributions from given head or eye coordinates, together with pre-generated saliency distribution map sets for each video derived from the collected eye-tracking data.

The dataset and related code are publicly available on our website: https://dianvrlab.github.io/Panonut360/.
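To make the idea of the saliency-generation script concrete, below is a minimal, hypothetical Python sketch, not the authors' released code (which is available at the link above). It places a conventional Gaussian falloff, the baseline assumption the paper challenges, at a fixation point on an equirectangular frame, and exposes a `gaze_offset_deg` parameter to mimic the downward bias of gaze relative to the FoV center reported in the abstract. All function and parameter names and numeric values here are illustrative assumptions.

```python
# Hypothetical sketch: per-frame saliency map from one head/eye fixation on an
# equirectangular frame. The sigma and offset values are illustrative, not
# values taken from the Panonut360 paper.
import numpy as np

def gaussian_saliency(width, height, yaw_deg, pitch_deg,
                      sigma_deg=10.0, gaze_offset_deg=0.0):
    """Return an (height, width) saliency map.

    yaw_deg         : horizontal viewing angle in [-180, 180)
    pitch_deg       : vertical viewing angle in [-90, 90]
    gaze_offset_deg : downward shift of the fixation relative to the FoV
                      center (hypothetical knob reflecting the dataset's
                      observed offset).
    """
    # Pixel grid expressed in degrees of longitude / latitude.
    lon = np.linspace(-180.0, 180.0, width, endpoint=False)
    lat = np.linspace(90.0, -90.0, height)
    lon_grid, lat_grid = np.meshgrid(lon, lat)

    # Shift the fixation center downward by the assumed gaze offset.
    center_pitch = pitch_deg - gaze_offset_deg

    # Longitude difference with wrap-around at the +/-180 degree seam.
    d_lon = (lon_grid - yaw_deg + 180.0) % 360.0 - 180.0
    d_lat = lat_grid - center_pitch
    dist2 = d_lon ** 2 + d_lat ** 2

    # Conventional Gaussian falloff, normalized to a peak of 1.
    sal = np.exp(-dist2 / (2.0 * sigma_deg ** 2))
    return sal / sal.max()

# Example: a 4K-width frame with a fixation at the frame center,
# shifted 5 degrees downward.
saliency = gaussian_saliency(3840, 1920, yaw_deg=0.0, pitch_deg=0.0,
                             gaze_offset_deg=5.0)
```

A donut-shaped weighting, as the paper's findings suggest, would replace the single Gaussian with a ring-like profile around the FoV center; the sketch above only illustrates the coordinate handling shared by both variants.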
