Embedding Gesture Prior to Joint Shape Optimization Based Real-Time 3D Hand Tracking
PubDate: February 2020
Teams: Beihang University
Writers: Yunlong Che; Yue Qi
PDF: Embedding Gesture Prior to Joint Shape Optimization Based Real-Time 3D Hand Tracking
Abstract
In this paper, we present a novel approach for real-time 3D hand tracking from a sequence of depth images. In each frame, our approach initializes the hand pose with a learned predictor and then jointly optimizes the hand pose and shape. For pose initialization, we propose a gesture classification and root location network (GCRL), which captures the meaningful topological structure of the hand to estimate its gesture and root location. With this per-frame initialization, our approach can rapidly recover from tracking failures. For optimization, unlike most existing methods, which rely on a fixed-size hand model or manual calibration, we propose a hand-gesture-guided optimization strategy that estimates pose and shape iteratively, making the tracking results more accurate. Experiments on three challenging datasets show that our approach achieves accuracy comparable to state-of-the-art approaches while running with low computational resources (without a GPU).
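The abstract describes a two-stage per-frame pipeline: a learned initialization (gesture class plus root location from the GCRL network) followed by iterative joint optimization of pose and shape. The sketch below illustrates that general structure only; it is not the authors' implementation, and every function name, parameter count, energy term, and optimizer choice in it (`gcrl_predict`, a 26-dimensional pose vector, a single scale-factor shape parameter, a squared point-to-point energy minimized by alternating gradient steps) is an assumed stand-in.

```python
# Minimal sketch of a "learned init + joint pose/shape optimization" tracker.
# All functions and parameters below are hypothetical stand-ins, not the
# paper's actual network, hand model, or energy function.

import numpy as np


def gcrl_predict(depth_frame):
    """Stand-in for the GCRL network: gesture class id + 3D root location."""
    gesture_id = 0
    root_xyz = np.array([0.0, 0.0, 0.5])  # metres (assumed units)
    return gesture_id, root_xyz


def init_pose_from_gesture(gesture_id, root_xyz, num_pose_params=26):
    """Initialize pose from a gesture-specific template (assumed 26 DoF)."""
    pose = np.zeros(num_pose_params)
    pose[:3] = root_xyz  # global translation taken from the root estimate
    return pose


def model_surface_points(pose, shape):
    """Toy 'hand model': a few points driven by translation and one scale."""
    base = np.array([[0.00, 0.00, 0.00],
                     [0.02, 0.05, 0.00],
                     [-0.02, 0.05, 0.00]])
    return pose[:3] + shape[0] * base


def energy(pose, shape, observed_points):
    """Squared distance between model points and observed depth points."""
    diff = model_surface_points(pose, shape) - observed_points
    return float(np.sum(diff ** 2))


def optimize_pose_and_shape(pose, shape, observed_points,
                            iters=50, step=0.05, eps=1e-4):
    """Alternate finite-difference gradient steps on pose and shape."""
    for _ in range(iters):
        for params in (pose, shape):
            grad = np.zeros_like(params)
            e0 = energy(pose, shape, observed_points)
            for i in range(len(params)):
                params[i] += eps
                grad[i] = (energy(pose, shape, observed_points) - e0) / eps
                params[i] -= eps
            params -= step * grad
    return pose, shape


def track_frame(depth_frame, observed_points, shape):
    """Per-frame tracking: learned initialization, then joint optimization."""
    gesture_id, root_xyz = gcrl_predict(depth_frame)
    pose = init_pose_from_gesture(gesture_id, root_xyz)
    return optimize_pose_and_shape(pose, shape, observed_points)


if __name__ == "__main__":
    depth = np.zeros((240, 320))                  # dummy depth frame
    obs = np.array([[0.01, 0.01, 0.51],
                    [0.03, 0.06, 0.51],
                    [-0.01, 0.06, 0.51]])         # dummy observed hand points
    shape = np.array([1.0])                       # single scale parameter
    pose, shape = track_frame(depth, obs, shape)
    print("estimated root:", pose[:3], "scale:", shape[0])
```

Because the initialization is recomputed every frame, a failure in one frame does not propagate; the optimization then refines both pose and shape rather than assuming a pre-calibrated hand size, which is the property the abstract contrasts against fixed-size or manually calibrated models.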