
GOAL: Generating 4D Whole-Body Motion for Hand-Object Grasping


PubDate: Sep 2022

Teams: Max Planck Institute for Intelligent Systems

Writers: Omid Taheri; Vasileios Choutas; Michael J. Black; Dimitrios Tzionas

PDF: GOAL: Generating 4D Whole-Body Motion for Hand-Object Grasping

Abstract

Generating digital humans that move realistically has many applications and is widely studied, but existing methods focus on the major limbs of the body, ignoring the hands and head. Hands have been studied separately, but the focus has been on generating realistic static grasps of objects. To synthesize virtual characters that interact with the world, we need to generate full-body motions and realistic hand grasps simultaneously. Both sub-problems are challenging on their own and, together, the state space of poses is significantly larger, the scales of hand and body motions differ, and the whole-body posture and the hand grasp must agree, satisfy physical constraints, and be plausible. Additionally, the head is involved because the avatar must look at the object to interact with it. For the first time, we address the problem of generating full-body, hand, and head motions of an avatar grasping an unknown object. As input, our method, called GOAL, takes a 3D object, its pose, and a starting 3D body pose and shape. GOAL outputs a sequence of whole-body poses using two novel networks. First, GNet generates a goal whole-body grasp with a realistic body, head, arm, and hand pose, as well as hand-object contact. Second, MNet generates the motion between the starting and goal pose. This is challenging, as it requires the avatar to walk towards the object with foot-ground contact, orient the head towards it, reach out, and grasp it with a realistic hand pose and hand-object contact. To achieve this, the networks exploit a representation that combines SMPL-X body parameters and 3D vertex offsets. We train and evaluate GOAL, both qualitatively and quantitatively, on the GRAB dataset. Results show that GOAL generalizes well to unseen objects, outperforming baselines. A perceptual study shows that GOAL's generated motions approach the realism of GRAB's ground truth. GOAL takes a step towards generating realistic full-body object-grasping motion. Our models and code are available for research purposes.
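The abstract describes a two-stage pipeline: GNet predicts a static whole-body goal grasp, and MNet generates the motion from the starting pose to that goal, over a representation combining SMPL-X body parameters and per-vertex 3D offsets. The sketch below is only an illustration of that interface, not the authors' code; all class names, function names, and array shapes here are assumptions, and the network bodies are stand-ins for trained models.

```python
# Hypothetical sketch of GOAL's two-stage inference flow (not the authors' code).
from dataclasses import dataclass
import numpy as np


@dataclass
class BodyState:
    """SMPL-X-style whole-body state: pose and shape parameters plus
    per-vertex 3D offsets, as described in the abstract."""
    pose_params: np.ndarray     # joint rotations, e.g. axis-angle, shape (J * 3,)
    shape_params: np.ndarray    # body shape coefficients, shape (B,)
    vertex_offsets: np.ndarray  # 3D offsets on the body mesh, shape (V, 3)


def gnet_goal_grasp(object_verts: np.ndarray,
                    object_pose: np.ndarray,
                    start: BodyState) -> BodyState:
    """Stage 1 (GNet): predict a goal whole-body grasp with realistic body,
    head, arm, and hand pose and hand-object contact.  A trained network
    would condition on the object geometry and pose; this placeholder
    simply returns the start state."""
    return start


def mnet_motion(start: BodyState, goal: BodyState,
                num_frames: int = 60) -> list[BodyState]:
    """Stage 2 (MNet): generate the motion from the start pose to the goal
    grasp (walking, head orientation, reaching, grasping).  This placeholder
    linearly interpolates the representation to mark where the learned
    motion generation would go."""
    frames = []
    for t in np.linspace(0.0, 1.0, num_frames):
        frames.append(BodyState(
            pose_params=(1 - t) * start.pose_params + t * goal.pose_params,
            shape_params=start.shape_params,  # subject shape stays fixed
            vertex_offsets=(1 - t) * start.vertex_offsets
                           + t * goal.vertex_offsets,
        ))
    return frames
```

In the paper, both stages are learned from the GRAB dataset; the linear interpolation above only stands in for MNet's generated motion, which additionally has to satisfy foot-ground and hand-object contact.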
