
Hand pose estimation in object-interaction based on deep learning for virtual reality applications

Note: We are not able to review the paper.

PubDate: July 2020

Teams: National Taiwan University

Writers: Min-Yu Wu; Pai-Wen Ting; Ya-Hui Tang; En-Te Chou; Li-Chen Fu

PDF: Hand pose estimation in object-interaction based on deep learning for virtual reality applications

Abstract

Hand pose estimation aims to predict the positions of the joints of a hand from an image, and it has grown popular with the emergence of VR/AR/MR technology. The task is difficult, however, because a hand easily suffers self-occlusion or external occlusion as it interacts with external objects, and many projects have been dedicated to finding a better solution to this problem. This paper develops a system that accurately estimates a hand pose in 3D space from depth images for VR applications. We propose a data-driven approach that trains a deep learning model for hand pose estimation under object interaction. In the convolutional neural network (CNN) training procedure, we design a skeleton-difference loss function that can effectively learn the physical constraints of a hand. We also propose an object-manipulating loss function that incorporates knowledge of the hand-object interaction to further enhance performance.
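The paper does not spell out its loss formulations in this abstract, so the following is only a minimal illustrative sketch of what a skeleton-difference style term could look like, assuming it compares inter-joint ("bone") vectors of the predicted and ground-truth skeletons so that physically implausible bone configurations are penalized. The `BONES` list, function name, and weighting are hypothetical, not the authors' implementation.

```python
import torch

# Hypothetical subset of a hand skeleton: (parent, child) joint index pairs.
BONES = [(0, 1), (1, 2), (2, 3), (3, 4),   # thumb chain
         (0, 5), (5, 6), (6, 7), (7, 8)]   # index-finger chain (other fingers omitted)

def skeleton_difference_loss(pred, gt, bones=BONES):
    """pred, gt: (batch, num_joints, 3) tensors of 3D joint positions."""
    loss = 0.0
    for parent, child in bones:
        pred_bone = pred[:, child] - pred[:, parent]   # predicted bone vector
        gt_bone = gt[:, child] - gt[:, parent]         # ground-truth bone vector
        # Penalize the deviation between corresponding bone vectors.
        loss = loss + torch.mean(torch.norm(pred_bone - gt_bone, dim=-1))
    return loss / len(bones)

# Usage sketch: combined with a per-joint regression loss during CNN training,
# e.g. total_loss = joint_loss + lambda_skel * skeleton_difference_loss(pred, gt)
```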

In experiments on hand pose estimation under different conditions, the results validate the robustness and performance of our system and show that our method predicts the joints more accurately in challenging environmental settings. These results may be attributed to accounting for the physical joint relationships as well as object information, and the approach can be applied to future VR/AR/MR systems for a more natural experience.
