
Synthetic Video Generation for Robust Hand Gesture Recognition in Augmented Reality Applications

Note: We do not have the ability to review papers.

PubDate: Dec 2019

Teams: Carnegie Mellon University; IIIT Delhi; TCS Research

Writers: Varun Jain, Shivam Aggarwal, Suril Mehta, Ramya Hebbalaguppe

PDF: Synthetic Video Generation for Robust Hand Gesture Recognition in Augmented Reality Applications

Abstract

Hand gestures are a natural means of interaction in Augmented Reality and Virtual Reality (AR/VR) applications. Recently, there has been an increased focus on removing the dependence of accurate hand gesture recognition on the complex sensor setups found in expensive proprietary devices such as the Microsoft HoloLens, Daqri, and Meta Glasses. Most such solutions rely either on multi-modal sensor data or on deep neural networks that benefit greatly from an abundance of labelled data. Datasets are an integral part of any deep-learning-based research and have been a principal driver of the substantial progress in this field, both by providing enough data to train these models and by serving as benchmarks for competing algorithms. However, it is becoming increasingly difficult to generate enough labelled data for complex tasks such as hand gesture recognition. The goal of this work is to introduce a framework capable of generating photo-realistic videos with labelled hand bounding boxes and fingertip positions, which can help in designing, training, and benchmarking models for hand-gesture recognition in AR/VR applications. We demonstrate the efficacy of our framework in generating videos with diverse backgrounds.
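To make the idea of automatically labelled synthetic frames concrete, the sketch below composites a segmented hand image (RGBA, transparent outside the hand) over randomly chosen background images and derives the hand bounding box from the alpha mask. This is a minimal illustration of label-preserving background compositing, not the authors' released framework; the function names, directory layout, and the restriction to bounding-box labels are assumptions made for this example.

```python
# Hypothetical sketch: composite segmented hand frames over diverse backgrounds
# and record per-frame bounding-box labels derived from the alpha mask.
import json
import random
from pathlib import Path

import numpy as np
from PIL import Image


def composite_frame(hand_rgba: Image.Image, background: Image.Image):
    """Paste a segmented hand onto a background and compute its bounding box."""
    bg = background.resize(hand_rgba.size).convert("RGB")
    bg.paste(hand_rgba, (0, 0), mask=hand_rgba)   # alpha channel acts as the mask

    alpha = np.array(hand_rgba.split()[-1])       # H x W alpha mask
    ys, xs = np.nonzero(alpha > 0)
    bbox = [int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())]
    return bg, bbox


def generate_labelled_frames(hand_frames_dir: str, backgrounds_dir: str, out_dir: str):
    """Composite each hand frame over a random background and save labels."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    backgrounds = list(Path(backgrounds_dir).glob("*.jpg"))
    labels = {}

    for frame_path in sorted(Path(hand_frames_dir).glob("*.png")):
        hand = Image.open(frame_path).convert("RGBA")
        bg = Image.open(random.choice(backgrounds))
        frame, bbox = composite_frame(hand, bg)
        frame.save(out / frame_path.name)
        # Fingertip labels would be carried over from the source annotation,
        # if available; only the bounding box is derived here.
        labels[frame_path.name] = {"hand_bbox": bbox}

    with open(out / "labels.json", "w") as f:
        json.dump(labels, f, indent=2)
```

In this toy setup, the bounding box comes for free from the segmentation mask, which mirrors the abstract's point that synthetic generation can supply labels without manual annotation.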
