Enhanced Fine-Grained Motion Diffusion for Text-Driven Human Motion Synthesis

Authors

  • Dong Wei School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China
  • Xiaoning Sun School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China
  • Huaijiang Sun School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China
  • Shengxiang Hu School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China
  • Bin Li Tianjin AiForward Science and Technology Co., Ltd., Tianjin, China
  • Weiqing Li School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China
  • Jianfeng Lu School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China

DOI:

https://doi.org/10.1609/aaai.v38i6.28401

Keywords:

CV: Biometrics, Face, Gesture & Pose, CV: Language and Vision, CV: Vision for Robotics & Autonomous Driving, HAI: Applications

Abstract

The emergence of text-driven motion synthesis techniques provides animators with great potential to create efficiently. However, in most cases, textual expressions only contain general and qualitative motion descriptions, while lacking fine depiction and sufficient intensity, leading to synthesized motions that are either (a) semantically compliant but uncontrollable over specific pose details, or (b) even deviating from the provided descriptions, presenting animators with undesired results. In this paper, we propose DiffKFC, a conditional diffusion model for text-driven motion synthesis with KeyFrames Collaborated, enabling realistic generation with collaborative and efficient dual-level control: coarse guidance at the semantic level, plus only a few keyframes for direct and fine-grained depiction down to the body posture level. Unlike existing inference-editing diffusion models that incorporate conditions without training, our conditional diffusion model is explicitly trained and can fully exploit correlations among texts, keyframes and the diffused target frames. To preserve the control capability of discrete and sparse keyframes, we customize dilated mask attention modules in which only partially valid tokens, indicated by the dilated keyframe mask, participate in local-to-global attention. In addition, we develop a simple yet effective smoothness prior that steers the generated frames towards seamless keyframe transitions at inference. Extensive experiments show that our model not only achieves state-of-the-art performance in terms of semantic fidelity but, more importantly, is able to satisfy animator requirements through fine-grained guidance without tedious labor.
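To illustrate the dilated-keyframe-mask idea described in the abstract, the following is a minimal, hypothetical sketch (not the authors' implementation): sparse keyframe positions are dilated into a binary validity mask over frame tokens, and a simplified single-head self-attention only attends to tokens marked valid by that mask. Function names such as `dilate_keyframe_mask` and `masked_self_attention` are assumptions for illustration; the paper's actual modules apply this masking within local-to-global attention inside a trained conditional diffusion model.

```python
# Illustrative sketch only: dilated keyframe mask + masked self-attention.
import torch
import torch.nn.functional as F

def dilate_keyframe_mask(keyframe_idx, num_frames, dilation=2):
    """Binary mask over frames: True for keyframes and frames within
    `dilation` steps of a keyframe, False elsewhere."""
    mask = torch.zeros(num_frames, dtype=torch.bool)
    for k in keyframe_idx:
        lo, hi = max(0, k - dilation), min(num_frames, k + dilation + 1)
        mask[lo:hi] = True
    return mask

def masked_self_attention(x, valid_mask):
    """Single-head self-attention over frame tokens in which only tokens
    marked valid by the (dilated) keyframe mask can be attended to.
    x: (num_frames, dim); valid_mask: (num_frames,) bool."""
    q, k, v = x, x, x                                   # no projections, for brevity
    scores = q @ k.t() / x.shape[-1] ** 0.5
    scores = scores.masked_fill(~valid_mask.unsqueeze(0), float("-inf"))
    return F.softmax(scores, dim=-1) @ v

frames = torch.randn(60, 64)                             # 60 diffused frame tokens, dim 64
mask = dilate_keyframe_mask([0, 30, 59], num_frames=60, dilation=3)
out = masked_self_attention(frames, mask)                # (60, 64)
```

A larger dilation lets information from the sparse keyframes reach more of the sequence, which is the intuition behind progressively expanding the mask from local to global attention.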

Published

2024-03-24

How to Cite

Wei, D., Sun, X., Sun, H., Hu, S., Li, B., Li, W., & Lu, J. (2024). Enhanced Fine-Grained Motion Diffusion for Text-Driven Human Motion Synthesis. Proceedings of the AAAI Conference on Artificial Intelligence, 38(6), 5876-5884. https://doi.org/10.1609/aaai.v38i6.28401

Section

AAAI Technical Track on Computer Vision V