
HULC: 3D Human Motion Capture with Pose Manifold Sampling and Dense Contact Guidance

Note: We do not have the ability to review papers.

PubDate: May 2022

Teams: Max Planck Institute for Informatics, Valeo.ai, Facebook Reality Labs

Writers: Soshi Shimada, Vladislav Golyanik, Patrick Pérez, Weipeng Xu, Christian Theobalt

PDF: HULC: 3D Human Motion Capture with Pose Manifold Sampling and Dense Contact Guidance

Abstract

Marker-less monocular 3D human motion capture (MoCap) with scene interactions is a challenging research topic relevant for extended reality, robotics and virtual avatar generation. Due to the inherent depth ambiguity of monocular settings, 3D motions captured with existing methods often contain severe artefacts such as incorrect body-scene inter-penetrations, jitter and body floating. To tackle these issues, we propose HULC, a new approach for 3D human MoCap which is aware of the scene geometry. HULC estimates 3D poses and dense body-environment surface contacts for improved 3D localisations, as well as the absolute scale of the subject. Furthermore, we introduce a 3D pose trajectory optimisation based on a novel pose manifold sampling that resolves erroneous body-environment inter-penetrations. Although the proposed method requires less structured inputs compared to existing scene-aware monocular MoCap algorithms, it produces more physically-plausible poses: HULC significantly and consistently outperforms the existing approaches in various experiments and on different metrics.
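
The abstract does not spell out the optimisation details, but the core idea of resolving body-scene inter-penetrations by sampling candidate poses on a learned pose manifold can be illustrated with a toy sketch. The snippet below is not HULC's implementation: the linear decoder, the single-ground-plane scene, the energy weights, and the greedy per-frame sampling loop are all simplified assumptions made purely for illustration.

```python
# Illustrative sketch only: sampling-based pose trajectory refinement with a
# body-scene penetration penalty. All names (decode_pose, GROUND_Y, weights)
# are hypothetical stand-ins, not HULC's actual components.
import numpy as np

rng = np.random.default_rng(0)

N_JOINTS = 24          # e.g. an SMPL-like skeleton
LATENT_DIM = 32        # dimensionality of the assumed pose manifold
GROUND_Y = 0.0         # toy scene: a single ground plane at y = 0

# Hypothetical stand-in for a learned pose-manifold decoder (latent -> joints).
# A fixed random linear map keeps the sketch self-contained and runnable.
W_DEC = rng.normal(scale=0.1, size=(LATENT_DIM, N_JOINTS * 3))

def decode_pose(z):
    """Map a latent code to 3D joint positions of shape (N_JOINTS, 3)."""
    return (z @ W_DEC).reshape(N_JOINTS, 3)

def penetration_energy(joints):
    """Penalise joints that sink below the ground plane (toy scene proxy)."""
    depth = np.maximum(GROUND_Y - joints[:, 1], 0.0)
    return float(np.sum(depth ** 2))

def data_energy(joints, observed):
    """Keep the refined pose close to the per-frame 3D estimate."""
    return float(np.mean(np.sum((joints - observed) ** 2, axis=-1)))

def refine_trajectory(z_init, observations, n_samples=64, sigma=0.05,
                      w_pen=10.0, w_smooth=1.0, n_iters=20):
    """Greedy per-frame refinement by sampling candidates on the pose manifold."""
    z_traj = z_init.copy()
    for _ in range(n_iters):
        for t in range(len(z_traj)):
            # Draw candidate latents around the current estimate and keep the
            # current estimate itself as one of the candidates.
            cand = z_traj[t] + sigma * rng.normal(size=(n_samples, LATENT_DIM))
            cand = np.vstack([z_traj[t][None], cand])
            best, best_e = z_traj[t], np.inf
            for z in cand:
                joints = decode_pose(z)
                e = (data_energy(joints, observations[t])
                     + w_pen * penetration_energy(joints))
                if t > 0:  # temporal smoothness against the previous frame
                    e += w_smooth * float(np.sum((z - z_traj[t - 1]) ** 2))
                if e < best_e:
                    best, best_e = z, e
            z_traj[t] = best
    return z_traj

# Toy usage: refine a 10-frame trajectory against noisy "observed" joints.
T = 10
z0 = rng.normal(size=(T, LATENT_DIM))
obs = np.stack([decode_pose(z) + rng.normal(scale=0.02, size=(N_JOINTS, 3))
                for z in z0])
z_refined = refine_trajectory(z0, obs)
print("refined trajectory shape:", z_refined.shape)
```

Because candidates are drawn on the (assumed) pose manifold rather than in raw joint space, every evaluated pose stays plausible, and the penetration term only has to choose among plausible poses; this is the intuition behind sampling-based resolution of inter-penetrations described in the abstract, not a reproduction of the paper's method.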
