
Eyemotion: Classifying facial expressions in VR using eye-tracking cameras

Note: We do not have the ability to review this paper.

Title: Eyemotion: Classifying facial expressions in VR using eye-tracking cameras

Teams: Georgia Institute of Technology; Google

Authors: Steven Hickson, Nick Dufour, Avneesh Sud, Vivek Kwatra, Irfan Essa

Publication date: July 2017

Abstract

One of the main challenges of social interaction in virtual reality settings is that head-mounted displays occlude a large portion of the face, blocking facial expressions and thereby restricting social engagement cues among users. Hence, auxiliary means of sensing and conveying these expressions are needed. We present an algorithm to automatically infer expressions by analyzing only a partially occluded face while the user is engaged in a virtual reality experience. Specifically, we show that images of the user’s eyes captured from an IR gaze-tracking camera within a VR headset are sufficient to infer a select subset of facial expressions without the use of any fixed external camera. Using these inferences, we can generate dynamic avatars in real-time which function as an expressive surrogate for the user. We propose a novel data collection pipeline as well as a novel approach for increasing CNN accuracy via personalization. Our results show a mean accuracy of 74% (F1 of 0.73) among 5 ‘emotive’ expressions and a mean accuracy of 70% (F1 of 0.68) among 10 distinct facial action units.
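The sketch below illustrates the general idea described in the abstract, not the authors' actual model or released code: a small CNN classifies expressions from single-channel IR eye-camera crops, and a brief fine-tuning pass on a few labeled frames from one user stands in for the personalization step. The expression labels, image size, and network layout are all assumptions made for illustration.

```python
# Hypothetical sketch of expression classification from IR eye-camera images,
# plus a simple per-user fine-tuning ("personalization") step.
# Labels, input size, and architecture are assumed, not taken from the paper.
import torch
import torch.nn as nn

EXPRESSIONS = ["neutral", "happy", "surprised", "angry", "eyes_closed"]  # assumed set

class EyeExpressionNet(nn.Module):
    def __init__(self, num_classes: int = len(EXPRESSIONS)):
        super().__init__()
        # Small conv stack over a single-channel (IR) eye crop, e.g. 64x64.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

def personalize(model: nn.Module, user_images: torch.Tensor,
                user_labels: torch.Tensor, steps: int = 20) -> None:
    """Fine-tune a pretrained model on a handful of frames from one user."""
    opt = torch.optim.SGD(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(model(user_images), user_labels)
        loss.backward()
        opt.step()

if __name__ == "__main__":
    model = EyeExpressionNet()
    fake_batch = torch.randn(8, 1, 64, 64)                   # stand-in for IR eye crops
    fake_labels = torch.randint(0, len(EXPRESSIONS), (8,))   # stand-in user labels
    personalize(model, fake_batch, fake_labels)
    print(model(fake_batch).argmax(dim=1))                   # predicted expression ids
```

In practice, a model like this would first be trained on many users' eye images and then lightly fine-tuned per user, which is one plausible reading of the personalization idea the abstract mentions.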
