
MonoClothCap: Towards Temporally Coherent Clothing Capture from Monocular RGB Video

Note: We do not have the ability to review papers.

PubDate: November 25, 2020

Teams: Carnegie Mellon University; Facebook

Authors: Donglai Xiang, Fabian Prada, Chenglei Wu, Jessica Hodgins

PDF: MonoClothCap: Towards Temporally Coherent Clothing Capture from Monocular RGB Video

Abstract

We present a method to capture temporally coherent dynamic clothing deformation from a monocular RGB video input. In contrast to the existing literature, our method does not require a pre-scanned personalized mesh template, and thus can be applied to in-the-wild videos. To constrain the output to a valid deformation space, we build statistical deformation models for three types of clothing: T-shirt, short pants, and long pants. A differentiable renderer is utilized to align our captured shapes to the input frames by minimizing differences in silhouette, segmentation, and texture. We develop a UV texture growing method that expands the visible texture region of the clothing sequentially in order to minimize drift in deformation tracking. We also extract fine-grained wrinkle detail from the input videos by fitting the clothed surface to normal maps estimated by a convolutional neural network. Our method produces temporally coherent reconstructions of body and clothing from monocular video. We demonstrate successful clothing capture results on a variety of challenging videos. Extensive quantitative experiments demonstrate the effectiveness of our method on metrics including body pose error and surface reconstruction error of the clothing.
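The statistical deformation models mentioned in the abstract can be read as low-dimensional linear bases over per-vertex clothing offsets from a template. Below is a minimal, hypothetical sketch of such a model built with PCA in Python/NumPy; the function names, data shapes, and the choice of plain PCA are illustrative assumptions, not the paper's actual construction.

```python
import numpy as np

def build_deformation_model(vertex_offsets, n_components=10):
    """Build a linear (PCA) deformation model from registered clothing meshes.

    vertex_offsets: array of shape (F, N, 3), per-frame offsets of N clothing
    vertices from a template mesh. Each frame is flattened before PCA.
    Returns the mean offset vector and the top principal components (basis).
    """
    F, N, _ = vertex_offsets.shape
    X = vertex_offsets.reshape(F, N * 3)
    mean = X.mean(axis=0)
    # SVD of the centered data yields the principal deformation directions.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    basis = Vt[:n_components]          # shape (n_components, N*3)
    return mean, basis

def decode_deformation(mean, basis, coeffs):
    """Reconstruct per-vertex offsets from low-dimensional coefficients."""
    offsets = mean + coeffs @ basis    # shape (N*3,)
    return offsets.reshape(-1, 3)

if __name__ == "__main__":
    # Toy round-trip example with random data standing in for registered meshes.
    rng = np.random.default_rng(0)
    frames = rng.normal(size=(200, 500, 3)).astype(np.float32)
    mean, basis = build_deformation_model(frames, n_components=10)
    coeffs = rng.normal(size=10).astype(np.float32)
    offsets = decode_deformation(mean, basis, coeffs)
    print(offsets.shape)  # (500, 3)
```

In a full pipeline of the kind the abstract describes, the coefficients would be optimized per frame against image-based losses (silhouette, segmentation, texture) through a differentiable renderer rather than sampled at random as in this toy usage.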
