Dense-Pose2SMPL: 3D Human Body Shape Estimation From a Single and Multiple Images and Its Performance Study
PubDate: July 2022
Teams: Seoul National University of Science and Technology, Seoul, South Korea
Writers: Dongjun Gu; Youngsik Yun; Thai Thanh Tuan; Heejune Ahn
Abstract
The shape and pose estimation of a human body is essential for human behavior analysis, sports and medical analysis, and virtual reality. Although 2D image data are much easier to acquire than 3D scan data, the estimation accuracy from 2D images still falls far below that of 3D scanning methods. In this paper, we propose a 2D image-based human body estimation method suited to body shape and size measurement. The proposed method, named Dense-Pose2SMPL, uses the SMPL (Skinned Multi-Person Linear) human body model together with the rich correspondences produced by the Dense-Pose network, and estimates the SMPL parameters by minimizing the re-projection error of the correspondences between pixels in the human image and 3D surface points on SMPL. Previous SMPL parameter estimation methods rely on sparse joint correspondences and therefore show very limited shape estimation performance. We compare the body measurement accuracy of Dense-Pose2SMPL on a single human image against SMPLify, a joint-based estimation method, and DecoMR, a Dense-Pose-based neural network regression method. The experimental results show a dramatic improvement of Dense-Pose2SMPL over SMPLify and DecoMR: the circumference estimation error decreases by over 30 percent for overweight and underweight subjects and by 10 to 20 percent on average. We also analyze the effects of the input conditions: the subject's BMI level, pose, clothing style, and camera viewpoint. An A-pose, side (profile) camera views, and minimal, tight clothing yield higher accuracy than the other conditions. Finally, extending the method to multiple (two) images gives around 15 percent further improvement in body size estimation accuracy over the single-image method.
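
The core of the method, as described in the abstract, is fitting SMPL parameters to dense pixel-to-surface correspondences by minimizing a re-projection error. The sketch below is not the authors' implementation; it is a minimal PyTorch illustration in which the names surface_points, project, BASIS, pix_uv, and vert_idx are placeholders standing in for the real SMPL forward pass, the camera model, and the Dense-Pose correspondences.

# Minimal PyTorch sketch (not the authors' code) of the Dense-Pose2SMPL idea:
# fit SMPL shape/pose parameters by minimizing the re-projection error of
# dense pixel-to-surface correspondences.
import torch

torch.manual_seed(0)

# Placeholder linear "body model"; a real fit would use an SMPL implementation
# (e.g. the smplx package) to map (betas, pose) to the 6890 mesh vertices.
BASIS = torch.randn(6890, 3, 82) * 0.01

def surface_points(betas, pose):
    # Stand-in for SMPL(betas, pose) -> (6890, 3) surface vertices.
    return BASIS @ torch.cat([betas, pose])

def project(points_3d, focal=1000.0, center=256.0):
    # Simple pinhole projection of camera-frame 3D points to image pixels,
    # with the body placed roughly 5 m in front of the camera.
    return focal * points_3d[:, :2] / (points_3d[:, 2:3] + 5.0) + center

# Dense-Pose-style correspondences: for each sampled foreground pixel, the
# index of the SMPL surface point it maps to (random here, for illustration).
pix_uv = torch.rand(2000, 2) * 512           # observed pixel coordinates
vert_idx = torch.randint(0, 6890, (2000,))   # matched SMPL vertex indices

betas = torch.zeros(10, requires_grad=True)  # shape parameters
pose = torch.zeros(72, requires_grad=True)   # pose parameters (axis-angle)
opt = torch.optim.Adam([betas, pose], lr=0.05)

for step in range(200):
    opt.zero_grad()
    reproj = project(surface_points(betas, pose)[vert_idx])
    loss = ((reproj - pix_uv) ** 2).mean()    # dense re-projection error
    loss.backward()
    opt.step()

In contrast to joint-based fitting such as SMPLify, which constrains the optimization with only a few dozen 2D joint locations, the dense formulation above supplies thousands of pixel-to-surface constraints, which is what allows the shape parameters (betas) to be recovered more accurately.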