
NVGaze: An Anatomically-Informed Dataset for Low-Latency, Near-Eye Gaze Estimation

Note: We do not have the ability to review this paper.

Title: NVGaze: An Anatomically-Informed Dataset for Low-Latency, Near-Eye Gaze Estimation

Teams: Nvidia; UNC

Writers: Joohwan Kim; Michael Stengel; Alexander Majercik (Nvidia); Shalini De Mello; David Dunn (UNC); Samuli Laine; Morgan McGuire; David Luebke

Publication date: May 4, 2019

Abstract

Quality, diversity, and size of training dataset are critical factors for learning-based gaze estimators. We create two datasets satisfying these criteria for near-eye gaze estimation under infrared illumination: a synthetic dataset using anatomically-informed eye and face models with variations in face shape, gaze direction, pupil and iris, skin tone, and external conditions (two million images at 1280x960), and a real-world dataset collected with 35 subjects (2.5 million images at 640x480). Using our datasets, we train a neural network for gaze estimation, achieving 2.06 (+/- 0.44) degrees of accuracy across a wide 30 x 40 degrees field of view on real subjects excluded from training and 0.5 degrees best-case accuracy (across the same field of view) when explicitly trained for one real subject. We also train a variant of our network to perform pupil estimation, showing higher robustness than previous methods. Our network requires fewer convolutional layers than previous networks, achieving sub-millisecond latency.
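The abstract describes a compact convolutional network that regresses gaze direction directly from a single near-eye infrared image, using fewer convolutional layers than prior networks to reach sub-millisecond latency. The sketch below, in PyTorch, illustrates that general shape only; the layer count, channel widths, and input resolution are illustrative assumptions and are not taken from the paper.

```python
# Minimal sketch of a lightweight CNN gaze regressor in the spirit of the
# abstract: a few convolutional layers followed by a 2-D gaze-angle output.
# All hyperparameters below (channels, kernel sizes, input resolution) are
# assumptions for illustration, not the architecture reported in the paper.
import torch
import torch.nn as nn

class SmallGazeNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ReLU(),   # single-channel IR input
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                                 # global pooling to a 64-d feature
        )
        self.head = nn.Linear(64, 2)  # regress (yaw, pitch) gaze angles

    def forward(self, x):
        f = self.features(x).flatten(1)
        return self.head(f)

# Usage example: a batch of 8 grayscale near-eye crops at an assumed 160x120 resolution.
net = SmallGazeNet()
gaze = net(torch.randn(8, 1, 120, 160))
print(gaze.shape)  # torch.Size([8, 2])
```

Keeping the network this shallow is what makes sub-millisecond inference plausible; the paper's reported accuracy figures, of course, come from its own architecture and training on the NVGaze datasets, not from this sketch.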
