T3M: Text Guided 3D Human Motion Synthesis from Speech
PubDate: Aug 2024
Teams: OpenGVLab, Shanghai AI Laboratory; New York University
Authors: Wenshuo Peng, Kaipeng Zhang, Sai Qian Zhang
PDF: T3M: Text Guided 3D Human Motion Synthesis from Speech
Abstract
Speech-driven 3D motion synthesis seeks to create lifelike animations from human speech, with potential applications in virtual reality, gaming, and film production. Existing approaches rely solely on speech audio for motion generation, leading to inaccurate and inflexible synthesis results. To mitigate this problem, we introduce a novel text-guided 3D human motion synthesis method, termed T3M. Unlike traditional approaches, T3M allows precise control over motion synthesis via textual input, enhancing diversity and user customization. Experimental results demonstrate that T3M greatly outperforms state-of-the-art methods in both quantitative metrics and qualitative evaluations.