
Sensory Fusion and Intent Recognition for Accurate Gesture Recognition in Virtual Environments

Note: We do not have the ability to review papers

PubDate: November 2018

Teams: University of Houston-Victoria; Rice University; University of Nevada

Writers: Sean Simmons, Kevin Clark, Alireza Tavakkoli, Donald Loffredo

PDF: Sensory Fusion and Intent Recognition for Accurate Gesture Recognition in Virtual Environments

Abstract

With the rapid growth of Virtual Reality applications, there is a significant need to bridge the gap between the real world and the virtual environment in which humans are immersed. Activity recognition will be an important factor in delivering models of human actions and operations into virtual environments. In this paper, we define an activity as being composed of atomic gestures and intents. With this approach, the proposed algorithm detects predefined activities by fusing multiple sensors. First, data are collected from both vision and wearable sensors to train Recurrent Neural Networks (RNN) for the detection of atomic gestures. Then, sequences of the gestures, as observable states, are labeled with their associated intents. These intents denote hidden states, and the sequences are used to train and test Hidden Markov Models (HMM). Each HMM is representative of a single activity. Upon testing, the proposed gesture recognition system achieves around 90% average accuracy with 95% mean confidence. The overall activity recognition performs at an average of 89% accuracy for simple and complex activities.
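To make the two-stage pipeline described in the abstract more concrete, the sketch below illustrates only the second stage: each activity is modeled by a discrete HMM whose hidden states are intents and whose observations are the atomic gesture labels produced upstream by the RNN detector, and an incoming gesture sequence is assigned to the activity whose HMM scores it with the highest likelihood. This is a minimal illustration, not the authors' implementation; the gesture vocabulary, the two toy activities, and all probabilities are invented for demonstration.

```python
# Minimal sketch of activity scoring with per-activity discrete HMMs.
# Hidden states ~ intents, observations ~ atomic gesture labels from the RNN.
# All names and parameters below are hypothetical, not from the paper.
import numpy as np

def forward_log_likelihood(obs, start_p, trans_p, emit_p):
    """Log-likelihood of an observation sequence under a discrete HMM
    (standard forward algorithm, computed in log space for stability)."""
    log_alpha = np.log(start_p) + np.log(emit_p[:, obs[0]])
    for o in obs[1:]:
        log_alpha = (
            np.logaddexp.reduce(log_alpha[:, None] + np.log(trans_p), axis=0)
            + np.log(emit_p[:, o])
        )
    return np.logaddexp.reduce(log_alpha)

# Hypothetical gesture vocabulary (column indices of the emission matrices).
GESTURES = {"reach": 0, "grasp": 1, "lift": 2, "release": 3}

# Two toy activity HMMs, each with two hidden intent states.
activities = {
    "pick_up_object": dict(
        start_p=np.array([0.8, 0.2]),
        trans_p=np.array([[0.6, 0.4], [0.3, 0.7]]),
        emit_p=np.array([[0.50, 0.40, 0.05, 0.05],
                         [0.05, 0.10, 0.50, 0.35]]),
    ),
    "put_down_object": dict(
        start_p=np.array([0.5, 0.5]),
        trans_p=np.array([[0.7, 0.3], [0.2, 0.8]]),
        emit_p=np.array([[0.1, 0.1, 0.3, 0.5],
                         [0.4, 0.3, 0.2, 0.1]]),
    ),
}

# A gesture sequence as it might arrive from the RNN gesture detector.
sequence = [GESTURES[g] for g in ["reach", "grasp", "lift"]]

# Score the sequence under every activity HMM and pick the most likely one.
scores = {name: forward_log_likelihood(sequence, **p)
          for name, p in activities.items()}
print(max(scores, key=scores.get), scores)
```

In practice the transition and emission matrices would be estimated from labeled gesture sequences (e.g., via Baum-Welch or counting), and the RNN front end would supply the gesture labels from the fused vision and wearable sensor streams rather than a hand-written list.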
