
You2Me: Inferring Body Pose in Egocentric Video via First and Second Person Interactions

Note: We are not able to review this paper.

PubDate: June 14, 2020

Teams: UC Berkeley, UT Austin, Carnegie Mellon University, Facebook AI Research

Authors: Evonne Ng, Donglai Xiang, Hanbyul Joo, Kristen Grauman

PDF: You2Me: Inferring Body Pose in Egocentric Video via First and Second Person Interactions

Project: You2Me: Inferring Body Pose in Egocentric Video via First and Second Person Interactions

Abstract

The body pose of a person wearing a camera is of great interest for applications in augmented reality, healthcare, and robotics, yet much of the person’s body is out of view for a typical wearable camera. We propose a learning-based approach to estimate the camera wearer’s 3D body pose from egocentric video sequences. Our key insight is to leverage interactions with another person—whose body pose we can directly observe—as a signal inherently linked to the body pose of the first-person subject. We show that since interactions between individuals often induce a well-ordered series of back-and-forth responses, it is possible to learn a temporal model of the interlinked poses even though one party is largely out of view. We demonstrate our idea on a variety of domains with dyadic interaction and show the substantial impact on egocentric body pose estimation, which improves the state of the art.
