Pupil Center Detection Based on the UNet for the User Interaction in VR and AR Environments
PubDate: August 2019
Teams: Seoul National University
Writers: Sang Yoon Han; Yoonsik Kim; Sang Hwa Lee; Nam Ik Cho
Finding the location of the pupil center is important for human-computer interaction, especially for user interfaces in AR/VR devices. In this paper, we propose an indirect use of a convolutional neural network (CNN) for the task: the network first segments the pupil region, and the pupil center is then computed as the center of mass of that region. To train the network, we create a dataset by labeling the pupil area in 111,581 images from 29 IR video sequences. We also label the pupil regions of widely used datasets to test and validate our method on a variety of inputs. Experiments show that the proposed method achieves higher accuracy than conventional methods and is robust to noise.
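The pipeline described above reduces center detection to segmentation followed by a centroid computation. A minimal sketch of that second step, assuming the CNN outputs a binary pupil mask (the `pupil_center` helper is illustrative, not from the paper):

```python
import numpy as np

def pupil_center(mask: np.ndarray) -> tuple[float, float]:
    """Return the (row, col) center of mass of a binary pupil mask."""
    ys, xs = np.nonzero(mask)          # coordinates of all pupil pixels
    if ys.size == 0:
        raise ValueError("mask contains no pupil pixels")
    # Center of mass of a binary region is the mean of its pixel coordinates.
    return float(ys.mean()), float(xs.mean())

# Example: a 5x5 mask with a 3x3 pupil blob centered at (2, 2)
mask = np.zeros((5, 5), dtype=np.uint8)
mask[1:4, 1:4] = 1
print(pupil_center(mask))  # → (2.0, 2.0)
```

Because the centroid averages over all segmented pixels, a few mislabeled pixels at the pupil boundary shift the estimate only slightly, which is one reason a segmentation-then-centroid approach can be robust to noise.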