Generating Synthetic Humans for Learning 3D Pose Estimation

Note: We do not have the ability to review this paper.

PubDate: March 2019

Teams: Tokyo Institute of Technology

Writers: Kohei Aso; Dong-Hyun Hwang; Hideki Koike

PDF: Generating Synthetic Humans for Learning 3D Pose Estimation


Abstract

We generate synthetic annotated data for learning 3D human pose estimation with an egocentric fisheye camera. Synthetic humans are rendered from a virtual fisheye camera with random backgrounds, random clothing, and random lighting parameters. In addition to RGB images, we generate ground truth in the form of 2D/3D poses and location heat-maps. This removes the need to capture a large, varied set of images and to label them manually for training. The approach is intended for challenging situations, such as capturing training data in sports.
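As a rough illustration of the kind of ground truth such a pipeline emits, the sketch below renders a per-joint location heat-map as a 2D Gaussian centered on a projected joint position. This is a common encoding for heat-map supervision; the resolution, sigma, and joint coordinates here are assumptions for illustration, not values taken from the paper.

```python
# Minimal sketch (assumption: "location heat-maps" are per-joint 2D Gaussians,
# a common ground-truth encoding; the paper's exact formulation is not given here).
import numpy as np

def joint_heatmap(joint_xy, height=64, width=64, sigma=2.0):
    """Render one joint's ground-truth heat-map as a 2D Gaussian.

    joint_xy: (x, y) pixel coordinates of the joint in the heat-map frame.
    Returns a (height, width) array peaking at 1.0 at the joint location.
    """
    xs = np.arange(width, dtype=np.float32)
    ys = np.arange(height, dtype=np.float32)[:, None]
    x0, y0 = joint_xy
    return np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2.0 * sigma ** 2))

# Example: heat-maps for all joints of one rendered frame (hypothetical 2D pose labels).
joints_2d = np.array([[20.5, 30.0], [32.0, 12.5]])
heatmaps = np.stack([joint_heatmap(j) for j in joints_2d])  # shape: (num_joints, 64, 64)
```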
