
RIT-Eyes: Rendering of near-eye images for eye-tracking applications

Note: We are unable to provide a review of this paper.

PubDate: Jun 2020

Teams: Rochester Institute of Technology

Writers: Nitinraj Nair, Rakshit Kothari, Aayush K. Chaudhary, Zhizhuo Yang, Gabriel J. Diaz, Jeff B. Pelz, Reynold J. Bailey

PDF: RIT-Eyes: Rendering of near-eye images for eye-tracking applications

Project: RIT-Eyes: Rendering of near-eye images for eye-tracking applications

Abstract

Deep neural networks for video-based eye tracking have demonstrated resilience to noisy environments, stray reflections, and low resolution. However, to train these networks, a large number of manually annotated images are required. To alleviate the cumbersome process of manual labeling, computer graphics rendering is employed to automatically generate a large corpus of annotated eye images under various conditions. In this work, we introduce a synthetic eye image generation platform that improves upon previous work by adding features such as an active deformable iris, an aspherical cornea, retinal retro-reflection, gaze-coordinated eye-lid deformations, and blinks. To demonstrate the utility of our platform, we render images reflecting the represented gaze distributions inherent in two publicly available datasets, NVGaze and OpenEDS. We also report on the performance of two semantic segmentation architectures (SegNet and RITnet) trained on rendered images and tested on the original datasets.
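The abstract mentions evaluating two semantic segmentation architectures trained on rendered images and tested on real datasets. Cross-dataset segmentation results like these are typically reported with mean Intersection-over-Union (mIoU). The sketch below shows a minimal mIoU computation over eye-region classes; the class names and IDs (background, iris, pupil) are illustrative assumptions, and the paper's exact evaluation protocol may differ.

```python
# Hypothetical sketch: mean Intersection-over-Union (mIoU), a standard
# metric for semantic segmentation, applied to toy eye-region label maps.
# Class IDs here (0 = background, 1 = iris, 2 = pupil) are illustrative.

def mean_iou(pred, gt, num_classes):
    """Compute mean IoU over flattened label maps (sequences of class IDs)."""
    ious = []
    for c in range(num_classes):
        inter = sum(1 for p, g in zip(pred, gt) if p == c and g == c)
        union = sum(1 for p, g in zip(pred, gt) if p == c or g == c)
        if union > 0:                 # skip classes absent from both maps
            ious.append(inter / union)
    return sum(ious) / len(ious) if ious else 0.0

# Toy example comparing a predicted map against ground truth
pred = [0, 0, 1, 1, 2, 2]
gt   = [0, 1, 1, 1, 2, 0]
score = mean_iou(pred, gt, num_classes=3)  # -> 0.5
```

In a train-on-synthetic, test-on-real setup such as the one described, per-class IoU on the real test set is usually the more informative breakdown, since small structures like the pupil tend to dominate tracking accuracy.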
