
Spatial-Temporal Editing For Dynamic Hair Data

Note: We do not have the ability to review this paper.

PubDate: May 2019

Teams: Beihang University; Beihang University Qingdao Research Institute

Writers: Yijie Wu; Yongtang Bao; Yue Qi

PDF:

Abstract

Hair plays a unique role in depicting a person’s character. Currently, most hair simulation techniques require substantial computation time or rely on complex capture setups. Editing and reusing existing hair model data are therefore important topics in computer graphics. In this paper, we present a spatial-temporal editing technique for dynamic hair data. The method can generate a longer, or even infinite-length, sequence of hair motion from a short input by following its motion trend. First, we build spatial-temporal neighborhood information for the input hair data. We then initialize the output according to the input exemplar and output constraints, and optimize the output through iterative search and assignment steps. To make the method more efficient, we select a sparse subset of the hair strands as guide hairs to simplify the model, and interpolate a full set of hair after synthesis. Results show that our method can handle a variety of hairstyles and different types of motion.
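The abstract's pipeline (initialize from the exemplar, then alternate search and assignment over spatial-temporal windows) resembles optimization-based exemplar synthesis. The paper's actual formulation is not given here, so the following is only a minimal illustrative sketch under assumed conventions: `exemplar` is a hypothetical `(T, D)` array of per-frame guide-hair states, initialization tiles random exemplar frames, the search step finds the nearest exemplar window for each output window, and the assignment step copies it back.

```python
import numpy as np

def synthesize(exemplar, out_len, window=4, iters=5, seed=0):
    """Illustrative search/assignment synthesis of a longer motion sequence.

    exemplar : (T, D) array of per-frame guide-hair state (assumed layout).
    out_len  : number of output frames to generate (may exceed T).
    """
    rng = np.random.default_rng(seed)
    T, _ = exemplar.shape
    # Initialization: fill the output with randomly chosen exemplar frames.
    out = exemplar[rng.integers(0, T, size=out_len)].copy()
    for _ in range(iters):
        for t in range(out_len - window):
            # Search step: nearest exemplar window to the current output window.
            dists = [np.sum((out[t:t + window] - exemplar[s:s + window]) ** 2)
                     for s in range(T - window)]
            s = int(np.argmin(dists))
            # Assignment step: write the matched exemplar frames into the output.
            out[t:t + window] = exemplar[s:s + window]
    return out
```

In the paper the synthesis runs on a sparse set of guide hairs only, with the full hair set interpolated afterwards; that interpolation step is omitted from this sketch.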
