
Neural Networks for Semantic Gaze Analysis in XR Settings

Note: We do not have the ability to review this paper.

PubDate: Mar 2021

Teams: University of Kassel

Writers: Lena Stubbemann, Dominik Dürrschnabel, Robert Refflinghaus

PDF: Neural Networks for Semantic Gaze Analysis in XR Settings

Abstract

Virtual-reality (VR) and augmented-reality (AR) technology is increasingly combined with eye-tracking. This combination broadens both fields and opens up new areas of application, in which visual perception and related cognitive processes can be studied in interactive but still well-controlled settings. However, performing a semantic gaze analysis of eye-tracking data from interactive three-dimensional scenes is a resource-intensive task, which so far has been an obstacle to economic use. In this paper we present a novel approach which minimizes the time and information necessary to annotate volumes of interest (VOIs) by using techniques from object recognition. To do so, we train convolutional neural networks (CNNs) on synthetic data sets derived from virtual models using image augmentation techniques. We evaluate our method in real and virtual environments, showing that the method can compete with state-of-the-art approaches, while not relying on additional markers or preexisting databases but instead offering cross-platform use.
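The abstract describes generating synthetic training sets by applying image augmentation to renders of virtual models. The paper does not detail its augmentation pipeline, so the following is only a minimal illustrative sketch of the general idea, using simple NumPy-based flips, brightness jitter, and shifts; the function names and parameters are hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image):
    """Apply simple augmentations to one rendered view of a virtual
    model (H x W x 3 float array with values in [0, 1]).

    Illustrative choices only: horizontal flip, brightness jitter,
    and a small random shift padded with zeros.
    """
    out = image.copy()
    if rng.random() < 0.5:
        out = out[:, ::-1, :]                              # horizontal flip
    out = np.clip(out * rng.uniform(0.7, 1.3), 0.0, 1.0)   # brightness jitter
    dy, dx = rng.integers(0, 8, size=2)                    # random shift
    shifted = np.zeros_like(out)
    shifted[dy:, dx:, :] = out[: out.shape[0] - dy, : out.shape[1] - dx, :]
    return shifted

def make_synthetic_set(render, n):
    """Expand one render into n augmented training images."""
    return np.stack([augment(render) for _ in range(n)])

# Hypothetical usage: a flat grey 64x64 "render" stands in for an
# actual image of a virtual model.
render = np.full((64, 64, 3), 0.5, dtype=np.float32)
batch = make_synthetic_set(render, 16)
```

In practice such augmented batches would then feed a CNN classifier that maps gaze-centered image patches to VOI labels; a real pipeline would typically use a dedicated augmentation library rather than hand-rolled array operations.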
