
Unsupervised 3D Human Pose Representation with Viewpoint and Pose Disentanglement

Note: We don't have the ability to review papers

PubDate: Jul 2020

Teams: The Chinese University of Hong Kong, T Stone Robotics Institute of CUHK

Writers: Qiang Nie, Ziwei Liu, Yunhui Liu

PDF: Unsupervised 3D Human Pose Representation with Viewpoint and Pose Disentanglement

Project: Unsupervised 3D Human Pose Representation with Viewpoint and Pose Disentanglement

Abstract

Learning a good 3D human pose representation is important for human pose-related tasks, e.g., 3D human pose estimation and action recognition. In all of these tasks, preserving the intrinsic pose information and adapting to view variations are two critical issues. In this work, we propose a novel Siamese denoising autoencoder that learns a 3D pose representation by disentangling pose-dependent and view-dependent features from human skeleton data in a fully unsupervised manner. These two disentangled features are used together as the representation of the 3D pose. To capture both kinematic and geometric dependencies, a sequential bidirectional recursive network (SeBiReNet) is further proposed to model the human skeleton data. Extensive experiments demonstrate that the learned representation 1) preserves the intrinsic information of the human pose and 2) shows good transferability across datasets and tasks. Notably, our approach achieves state-of-the-art performance on two inherently different tasks: pose denoising and unsupervised action recognition. Code and models are available at: this https URL
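To make the disentanglement idea concrete, here is a minimal PyTorch sketch of a denoising autoencoder whose latent code is split into a pose-dependent part and a view-dependent part, trained in a Siamese fashion on two views of the same pose. All names, dimensions, the plain MLP encoder/decoder, and the loss terms are illustrative assumptions for this page, not the paper's actual implementation (which models the skeleton with the proposed SeBiReNet and its own objectives).

```python
# Hypothetical sketch: Siamese denoising autoencoder with a latent code split
# into pose-dependent and view-dependent features. Not the authors' code.
import torch
import torch.nn as nn

class DisentanglingAutoencoder(nn.Module):
    def __init__(self, num_joints=17, pose_dim=64, view_dim=16):
        super().__init__()
        in_dim = num_joints * 3  # flattened (x, y, z) joint coordinates
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, pose_dim + view_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(pose_dim + view_dim, 256), nn.ReLU(),
            nn.Linear(256, in_dim),
        )
        self.pose_dim = pose_dim

    def forward(self, noisy_skeleton):
        # Encode a noisy skeleton, then split the code into the two parts.
        z = self.encoder(noisy_skeleton.flatten(1))
        pose_feat, view_feat = z[:, :self.pose_dim], z[:, self.pose_dim:]
        recon = self.decoder(torch.cat([pose_feat, view_feat], dim=1))
        return recon, pose_feat, view_feat

# Siamese usage: two noisy views of the same pose share one encoder. A
# denoising reconstruction loss plus a term pulling their pose features
# together stands in (as an assumption) for the paper's disentanglement loss.
model = DisentanglingAutoencoder()
view_a = torch.randn(8, 17, 3)  # toy batch: clean skeletons, view A
view_b = torch.randn(8, 17, 3)  # toy batch: same poses under another view
recon_a, pose_a, _ = model(view_a + 0.05 * torch.randn_like(view_a))
recon_b, pose_b, _ = model(view_b + 0.05 * torch.randn_like(view_b))
loss = (nn.functional.mse_loss(recon_a, view_a.flatten(1))
        + nn.functional.mse_loss(recon_b, view_b.flatten(1))
        + nn.functional.mse_loss(pose_a, pose_b))
loss.backward()
```

The pose feature is the part of the code encouraged to stay the same across views, while the view feature absorbs the remaining viewpoint-specific variation; both together reconstruct the input, matching the abstract's description of using the two disentangled features jointly as the 3D pose representation.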
