A Deep Motion Sickness Predictor Induced by Visual Stimuli in Virtual Reality

Note: We do not have the ability to review this paper

PubDate: October 2020

Teams: Yonsei University; Electronics and Telecommunications Research Institute

Writers: Jinwoo Kim; Heeseok Oh; Woojae Kim; Seonghwa Choi; Wookho Son; Sanghoon Lee

PDF: A Deep Motion Sickness Predictor Induced by Visual Stimuli in Virtual Reality


In a virtual reality (VR) environment, where visual stimuli predominate over other stimuli, users experience cybersickness because illusory self-motion disrupts the body's sense of balance. Accordingly, the VR experience is accompanied by unavoidable sickness referred to as visually induced motion sickness (VIMS). In this article, our primary goal is to simultaneously estimate a VIMS score from the content itself and compute the temporally induced VIMS sensitivity. To achieve these goals, we propose a novel architecture composed of two consecutive networks: 1) neurological representation and 2) spatiotemporal representation. In the first stage, the network imitates and learns the neurological mechanism of motion sickness. In the second stage, salient spatial and temporal features are extracted over the generated frames. After training, our model can calculate the VIMS sensitivity of each frame of the VR content using a weakly supervised approach, since temporal VIMS scores are unannotated. Furthermore, we release a large-scale VR content database. In our experiments, the proposed framework demonstrates excellent performance on VIMS score prediction compared with existing methods, including feature-engineering and deep learning-based approaches. Finally, we propose a way to visualize the cognitive response to visual stimuli and demonstrate that the induced sickness tends to be activated in a manner consistent with findings from clinical studies.
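The weakly supervised idea described above — learning per-frame sensitivities when only a single overall VIMS score is annotated per video — can be illustrated with a minimal sketch. This is not the paper's actual model: the attention-style softmax weighting, the scalar per-frame "sickness evidence" inputs, and the function names are all hypothetical simplifications used only to show how frame-level weights can combine into one content-level score.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scalars."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def aggregate_vims(frame_evidence):
    """Hypothetical weakly supervised aggregation.

    Only the overall VIMS score of a VR clip is annotated, so
    per-frame sensitivities are modeled as attention weights over
    frames; the weighted sum gives the clip-level prediction that
    the annotated score would supervise during training.
    """
    weights = softmax(frame_evidence)          # per-frame VIMS sensitivity
    score = sum(w * f for w, f in zip(weights, frame_evidence))
    return weights, score

# Example: frames with stronger visual motion carry more "evidence"
weights, score = aggregate_vims([0.1, 0.3, 2.0, 1.5])
```

Because the weights sum to one, the clip-level score stays within the range of the per-frame evidence, and inspecting the weights after training would indicate which frames the model deems most sickness-inducing.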