VR Sickness Prediction from Integrated HMD’s Sensors using Multimodal Deep Fusion Network

Note: We don't have the ability to review papers.

PubDate: Aug 2021

Teams: The University of Texas at San Antonio

Writers: Rifatul Islam, Kevin Desai, John Quarles

PDF: VR Sickness Prediction from Integrated HMD’s Sensors using Multimodal Deep Fusion Network

Abstract

Virtual Reality (VR) sickness, commonly known as cybersickness, is one of the major obstacles to the comfortable use of VR systems. Researchers have proposed different approaches for predicting cybersickness from bio-physiological data (e.g., heart rate, breathing rate, electroencephalogram). However, collecting bio-physiological data often requires external sensors, which limit locomotion and 3D-object manipulation during the VR experience. Limited research has been done on predicting cybersickness from data readily available from the sensors integrated in head-mounted displays (HMDs) (e.g., head-tracking, eye-tracking, and motion features), which would allow free locomotion and 3D-object manipulation. This research proposes a novel deep fusion network to predict cybersickness severity from heterogeneous data readily available from the integrated HMD sensors. We extracted 1755 stereoscopic videos, eye-tracking, and head-tracking data, along with the corresponding self-reported cybersickness severity, collected from 30 participants during their VR gameplay. We applied several deep fusion approaches to the heterogeneous data collected from the participants. Our results suggest that cybersickness can be predicted with an accuracy of 87.77% and a root-mean-square error of 0.51 when using only eye-tracking and head-tracking data. We concluded that eye-tracking and head-tracking data are well suited for a standalone cybersickness prediction framework.
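
The abstract does not specify the network architecture, so the following is only a minimal, hypothetical sketch of the general idea: encode each HMD sensor stream (eye-tracking and head-tracking) separately, fuse the representations, and regress a cybersickness severity score. The model class, feature dimensions, and window length below are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class DeepFusionNet(nn.Module):
    """Hypothetical late-fusion network for cybersickness severity.

    One GRU encoder per modality (eye-tracking, head-tracking), fused by
    concatenation before a small regression head. Illustrative only; the
    paper's actual architecture is not reproduced here.
    """

    def __init__(self, eye_dim=4, head_dim=6, hidden=64):
        super().__init__()
        # Per-modality sequence encoders over windows of sensor samples
        self.eye_enc = nn.GRU(eye_dim, hidden, batch_first=True)
        self.head_enc = nn.GRU(head_dim, hidden, batch_first=True)
        # Fusion + regression head predicting a severity score per window
        self.regressor = nn.Sequential(
            nn.Linear(2 * hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, eye_seq, head_seq):
        _, h_eye = self.eye_enc(eye_seq)      # final hidden state: (1, B, hidden)
        _, h_head = self.head_enc(head_seq)
        fused = torch.cat([h_eye[-1], h_head[-1]], dim=-1)  # late fusion
        return self.regressor(fused).squeeze(-1)            # (B,) severity

# Example with assumed shapes: batch of 8 windows, 90 timesteps each
model = DeepFusionNet()
eye = torch.randn(8, 90, 4)   # e.g., gaze direction + pupil size (assumed)
head = torch.randn(8, 90, 6)  # e.g., head rotation + angular velocity (assumed)
print(model(eye, head).shape)  # torch.Size([8])
```

Concatenation-based late fusion is only one of several fusion strategies the abstract alludes to ("several deep fusion approaches"); intermediate or attention-based fusion over the same per-modality encoders would be equally plausible variants.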
