Early Prediction of Cybersickness in Virtual Reality Using a Large Language Model for Multimodal Time Series Data
PubDate: Oct 2024
Teams: Hanyang University
Writers: Yoonseon Choi, Dayoung Jeong, Bogoan Kim, Kyungsik Han
Abstract
Cybersickness in virtual reality (VR) significantly disrupts user immersion. Although recent studies have proposed cybersickness prediction models, existing models predict cybersickness only at the moment of onset, limiting their applicability to proactive detection. To address this limitation, we applied long-term time series forecasting (LTSF) models to multimodal sensor data collected from a head-mounted display (HMD). We leveraged a pre-trained large language model (LLM) to learn salient features of the multimodal sensor data (e.g., seasonality) by capturing the nuanced context within the data. The results of our experiment demonstrated that our model achieved performance comparable to the baseline models, with an MAE of 0.971 and an RMSE of 1.696. This indicates the potential for early prediction of cybersickness by employing LLM- and LTSF-based models with multimodal sensor data, suggesting a new direction in model development.
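The abstract describes the approach only at a high level, so the following is a minimal sketch of the general LLM-for-LTSF recipe it alludes to (patch the multimodal sensor window, project patches into a frozen pretrained LLM backbone, and decode a forecast horizon), not the authors' implementation. The class name LLM4TSF, the GPT-2 backbone, and the channel count, patch length, and horizon are all illustrative assumptions.

```python
import torch
import torch.nn as nn
from transformers import GPT2Model

class LLM4TSF(nn.Module):
    """Sketch of an LLM-backed long-term time series forecaster:
    multimodal HMD sensor channels are sliced into patches, projected
    into the LLM's embedding space, passed through the frozen backbone,
    and mapped to a future cybersickness-score horizon.
    (Hypothetical architecture; not the paper's exact model.)"""

    def __init__(self, n_channels=8, patch_len=16, horizon=48, d_model=768):
        super().__init__()
        self.patch_len = patch_len
        # Each patch flattens patch_len time steps across all channels.
        # d_model must match the backbone's hidden size (768 for GPT-2).
        self.embed = nn.Linear(n_channels * patch_len, d_model)
        self.backbone = GPT2Model.from_pretrained("gpt2")
        # Freeze the pretrained weights; only the input/output adapters train.
        for p in self.backbone.parameters():
            p.requires_grad = False
        self.head = nn.Linear(d_model, horizon)

    def forward(self, x):
        # x: (batch, time, channels) window of multimodal sensor data
        b, t, c = x.shape
        n_patches = t // self.patch_len
        patches = x[:, : n_patches * self.patch_len].reshape(
            b, n_patches, self.patch_len * c
        )
        tokens = self.embed(patches)  # (b, n_patches, d_model)
        hidden = self.backbone(inputs_embeds=tokens).last_hidden_state
        # Decode the forecast from the final token's representation.
        return self.head(hidden[:, -1])  # (b, horizon)

# Toy usage: a 10-second window at 32 Hz over 8 sensor channels.
model = LLM4TSF()
window = torch.randn(4, 320, 8)
forecast = model(window)  # predicted cybersickness trajectory, shape (4, 48)
```

Such a forecaster would be trained with a regression loss against future sickness scores and evaluated with MAE and RMSE, the two metrics reported in the abstract.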