
Eye Fixation Versus Pupil Diameter as Eye-Tracking Features for Virtual Reality Emotion Classification

Note: We do not have the ability to review this paper

PubDate: January 2022

Teams: Universiti Malaysia Sabah

Writers: Lim Jia Zheng; James Mountstephens; Jason Teo

PDF: Eye Fixation Versus Pupil Diameter as Eye-Tracking Features for Virtual Reality Emotion Classification

Abstract

The use of eye-tracking technology is becoming increasingly popular in machine learning applications, particularly in the area of affective computing and emotion recognition. Typically, emotion recognition studies rely on popular physiological signals such as electroencephalography (EEG), while research on emotion detection that relies solely on eye-tracking data remains limited. In this study, an empirical comparison of the accuracy of eye-tracking-based emotion recognition in a virtual reality (VR) environment is performed using eye fixation versus pupil diameter as the classification feature. We classified emotions into four distinct classes according to Russell’s four-quadrant Circumplex Model of Affect. 360° videos were presented as emotional stimuli to participants in a VR environment to evoke their emotions. Three separate experiments were conducted using Support Vector Machines (SVMs) as the classification algorithm for the two chosen eye features. The results showed that emotion classification using fixation position achieved an accuracy of 75%, while pupil diameter achieved an accuracy of 57%. For four-quadrant emotion recognition, eye fixation as a learning feature therefore produces better classification accuracy than pupil diameter. This empirical study has shown that eye-tracking-based emotion recognition systems would benefit from using features based on eye fixation data rather than pupil size.
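
To illustrate the kind of comparison the abstract describes, the sketch below trains an SVM separately on two feature sets and reports cross-validated accuracy for each. It is a minimal illustration, not the authors' code: the feature arrays, trial counts, and labels are hypothetical placeholders standing in for per-trial fixation-position and pupil-diameter features with four-quadrant emotion labels.

```python
# Minimal sketch (not the authors' implementation): comparing SVM accuracy on
# two hypothetical eye-tracking feature sets -- fixation position vs. pupil
# diameter -- for four-quadrant emotion labels, using scikit-learn.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials = 200  # hypothetical number of labelled trials

# Hypothetical features: (x, y) fixation position vs. scalar pupil diameter,
# aggregated per trial; labels are the four Circumplex quadrants (0-3).
X_fixation = rng.normal(size=(n_trials, 2))
X_pupil = rng.normal(size=(n_trials, 1))
y = rng.integers(0, 4, size=n_trials)

for name, X in [("fixation position", X_fixation), ("pupil diameter", X_pupil)]:
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validated accuracy
    print(f"{name}: mean accuracy = {scores.mean():.2f}")
```

With real per-trial features in place of the random placeholders, the two printed accuracies would correspond to the kind of fixation-versus-pupil comparison reported in the abstract (75% vs. 57%).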
