
Using Facial Animation to Increase the Enfacement Illusion and Avatar Self-Identification

Note: We don't have the ability to review papers

PubDate: February 2020

Teams: Microsoft Research; University College London

Writers: Mar Gonzalez-Franco; Anthony Steed; Steve Hoogendyk; Eyal Ofek

PDF: Using Facial Animation to Increase the Enfacement Illusion and Avatar Self-Identification

Abstract

Through avatar embodiment in Virtual Reality (VR) we can achieve the illusion that an avatar is substituting our body: the avatar moves as we move and we see it from a first-person perspective. However, self-identification, the process of identifying a representation as being oneself, poses new challenges because a key determinant is that we see and have agency over our own face. Providing control over the face is hard with current HMD technologies because face tracking is either cumbersome or error-prone. However, limited animation based on speech is easily achieved. We investigate the level of avatar enfacement, that is, believing that a picture of a face is one’s own face, with three levels of facial animation: (i) one in which the facial expressions of the avatars are static, (ii) one in which we implement lip-sync motion, and (iii) one in which the avatar presents lip-sync plus additional facial animations, with blinks, designed by a professional animator. We measure self-identification using a face morphing tool that morphs from the face of the participant to the face of a gender-matched avatar. We find that self-identification on avatars can be increased through pre-baked animations even when these are neither photorealistic nor resemble the participant.
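The paper does not publish its morphing tool, but the general idea of the self-identification measure can be illustrated with a minimal sketch: blend between a pre-aligned photograph of the participant and the gender-matched avatar face, and sweep the blend factor as a participant would with a slider. The helper name, file names, and the simple pixel cross-dissolve below are assumptions for illustration only; a real morphing tool would also warp facial landmarks rather than only mix pixels.

```python
import numpy as np
from PIL import Image

def blend_faces(participant_path: str, avatar_path: str, alpha: float) -> Image.Image:
    """Cross-dissolve between a participant photo and an avatar face.

    alpha = 0.0 returns the participant's face, alpha = 1.0 the avatar's.
    Both images are assumed to be pre-aligned and of identical size.
    """
    a = np.asarray(Image.open(participant_path).convert("RGB"), dtype=np.float32)
    b = np.asarray(Image.open(avatar_path).convert("RGB"), dtype=np.float32)
    if a.shape != b.shape:
        raise ValueError("Images must be pre-aligned to the same size")
    out = (1.0 - alpha) * a + alpha * b
    return Image.fromarray(out.astype(np.uint8))

# Hypothetical usage: generate the morph continuum in 10% steps so a
# participant can indicate where the face stops looking like their own.
for step in range(11):
    blend_faces("participant.png", "avatar.png", step / 10.0).save(f"morph_{step:02d}.png")
```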
