Image-based hand pose classification using faster R-CNN
PubDate: December 2017
Teams: Incheon National University
Writers: Young-Jun Son; Ouk Choi
Recently, augmented reality and virtual reality (AR/VR) have been commercialized in the game, industry, and education fields. For human interaction with virtual objects, hand pose is estimated using remote controllers or depth sensors. However, using such controllers or sensors is inconvenient or impossible in outdoor environments. AR/VR devices such as smartphones and glasses are equipped with cameras, which are ready to be used outdoors. To enable such devices to be controlled with human hands in outdoor environments, we propose an image-based hand-pose classification method based on Faster R-CNN. To train and test the Faster R-CNN, we newly collected a hand pose dataset of 111,362 images. The dataset consists of images of left hands, which are flipped to generate right-hand images, resulting in 6 different classes. Segmented hand regions and their classes are also provided with the dataset. Our model achieves a mean Average Precision of 95%.
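The flipping step described above (deriving right-hand samples from left-hand images) can be sketched as a simple horizontal mirror that also transforms the segmented hand region's bounding box. This is a minimal illustration, not the authors' pipeline; the helper name and the `(x_min, y_min, x_max, y_max)` box convention are assumptions.

```python
import numpy as np

def flip_hand_sample(image, box):
    """Generate a right-hand sample from a left-hand one by horizontal
    flipping (hypothetical helper; the paper only states that flipping
    is used, not how boxes are handled).

    image: H x W x C array; box: (x_min, y_min, x_max, y_max) in pixels.
    """
    h, w = image.shape[:2]
    flipped = image[:, ::-1].copy()  # mirror the image along the x-axis
    x_min, y_min, x_max, y_max = box
    # The box mirrors too: x-coordinates are now measured from the right edge.
    new_box = (w - x_max, y_min, w - x_min, y_max)
    return flipped, new_box

# Example: a 2 x 4 single-channel image with a box over its left half.
img = np.arange(8).reshape(2, 4, 1)
flipped_img, flipped_box = flip_hand_sample(img, (0, 0, 2, 2))
print(flipped_box)  # (2, 0, 4, 2): the box now covers the right half
```

In practice the flipped copies would be assigned the corresponding right-hand class labels, doubling the effective class count for the detector.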