Partially occluded facial action recognition and interaction in virtual reality applications
PubDate: August 2017
Teams: Binghamton University; Apollo Box Inc.
Writers: Umur Aybars Ciftci; Xing Zhang; Lijun Yin
The proliferation of affordable virtual reality (VR) head-mounted displays (HMDs) provides users with realistic, immersive visual experiences. However, an HMD occludes the upper half of the user's face and prevents facial action recognition from the entire face; the full face therefore cannot serve as a source of feedback for more interactive VR applications. To tackle this problem, we propose a new depth-based recognition framework that recognizes mouth gestures and uses them as a medium of interaction within virtual reality in real time. Our system uses a new 3D edge map approach to describe mouth features and classifies those features into seven gesture classes. The accuracy of the proposed mouth gesture framework was evaluated in user-independent tests and achieved high recognition rates. The system has also been demonstrated and validated through a real-time virtual reality application.
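To illustrate the kind of pipeline the abstract describes, here is a minimal sketch: an edge map computed from a depth patch of the mouth region, summarized into a feature vector, then assigned to a gesture class. This is not the paper's actual method; the gesture labels, the gradient-magnitude edge map, the three summary statistics, and the nearest-centroid classifier are all illustrative stand-ins.

```python
import numpy as np

# Hypothetical gesture labels; the paper defines seven classes but does not
# name them in this abstract.
GESTURES = ["neutral", "smile", "frown", "open", "pucker", "left", "right"]

def depth_edge_map(depth_patch):
    """Crude stand-in for a 3D edge map: gradient magnitude of the depth values."""
    gy, gx = np.gradient(depth_patch.astype(float))
    return np.hypot(gx, gy)

def features(depth_patch):
    """Summarize edge strengths as a fixed-length feature vector (illustrative)."""
    edges = depth_edge_map(depth_patch)
    return np.array([edges.mean(), edges.std(), edges.max()])

class NearestCentroid:
    """Tiny stand-in classifier: assign each sample to the closest class centroid."""
    def fit(self, X, y):
        self.classes_ = sorted(set(y))
        self.centroids_ = np.array(
            [np.mean([x for x, label in zip(X, y) if label == c], axis=0)
             for c in self.classes_])
        return self

    def predict(self, X):
        # Euclidean distance from each sample to each centroid.
        d = np.linalg.norm(np.asarray(X)[:, None, :] - self.centroids_[None, :, :], axis=2)
        return [self.classes_[i] for i in d.argmin(axis=1)]
```

A usage example with synthetic depth patches: a nearly flat patch (weak edges) versus a ridged patch (strong edges) trains two of the classes and classifies new patches by their edge statistics.

```python
rng = np.random.default_rng(0)
flat = [rng.random((8, 8)) * 0.1 for _ in range(5)]                      # shallow relief
ridged = [np.tile(np.linspace(0, 5, 8), (8, 1)) + rng.random((8, 8)) for _ in range(5)]
X = [features(p) for p in flat + ridged]
y = ["neutral"] * 5 + ["open"] * 5
clf = NearestCentroid().fit(X, y)
print(clf.predict([features(flat[0]), features(ridged[0])]))
```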