
Learn2Smile: Learning Non-Verbal Interaction Through Observation


Title: Learn2Smile: Learning Non-Verbal Interaction Through Observation

Team: Facebook AI Research

Authors: Will Feng, Anitha Kannan, Georgia Gkioxari, C. Lawrence Zitnick

Publication date: Sep 1, 2017

Abstract

In this paper, we explore how to interact with humans using visual expression cues. In particular, we learn to predict appropriate responses to a user’s facial expressions through observation. For this, we use hundreds of videos of pairs of people engaged in conversation, without any external human supervision. This is unlike previous approaches that explicitly label emotional states such as happy, sad, or surprised. In our work, we train a deep neural network to predict the agent’s expressions conditioned on the user’s expressions, eliminating the expensive step of manual supervision. No doubt, our approach could be further improved by using contextual cues from the conversational content of the interaction, e.g., in the form of an audio signal or transcribed text. However, in this work we focus on the direct expression-to-expression approach, which serves as a fundamental building block of the final system.
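The core idea, predicting the agent's expression features directly from the user's expression features, can be sketched as a simple regression model. The sketch below is purely illustrative: the feature dimension, hidden size, and two-layer MLP are assumptions for demonstration, not the paper's actual architecture (which is a deep neural network trained on conversational video).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: e.g., 68 facial landmarks -> 136 (x, y) coordinates.
feat_dim, hidden_dim = 136, 64

# Randomly initialized two-layer MLP mapping user expression
# features to predicted agent expression features.
W1 = rng.normal(0, 0.1, (feat_dim, hidden_dim))
b1 = np.zeros(hidden_dim)
W2 = rng.normal(0, 0.1, (hidden_dim, feat_dim))
b2 = np.zeros(feat_dim)

def predict_agent_expression(user_feats):
    """user_feats: (batch, feat_dim) array of user expression features
    (e.g., facial landmark coordinates per frame).
    Returns predicted agent expression features of the same shape."""
    h = np.tanh(user_feats @ W1 + b1)
    return h @ W2 + b2

# A batch of 4 user expression vectors.
user_batch = rng.normal(size=(4, feat_dim))
pred = predict_agent_expression(user_batch)
print(pred.shape)  # (4, 136)
```

In the paper's setting, such a model would be trained on frames from two-person conversation videos, with the user's expression features as input and the other speaker's expression features as the regression target, which is what removes the need for manual emotion labels.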
