DanceAnyWay: Synthesizing Mixed-Genre 3D Dance Movements Through Beat Disentanglement

01/30/2023
by Aneesh Bhattacharya et al.

We present DanceAnyWay, a hierarchical generative adversarial learning method to synthesize mixed-genre dance movements of 3D human characters synchronized with music. Our method learns to disentangle the dance movements at the beat frames from the dance movements at all the remaining frames by operating at two hierarchical levels. At the coarser "beat" level, it encodes the rhythm, pitch, and melody information of the input music via dedicated feature representations only at the beat frames. It leverages them to synthesize the beat poses of the target dance using a sequence-to-sequence learning framework. At the finer "repletion" level, our method encodes similar rhythm, pitch, and melody information from all the frames of the input music via dedicated feature representations and couples them with the synthesized beat poses from the coarser level to synthesize the full target dance sequence using an adversarial learning framework. By disentangling the broader dancing styles at the coarser level from the specific dance movements at the finer level, our method can efficiently synthesize dances composed of arbitrarily mixed genres and styles. We evaluate the performance of our approach through extensive experiments on both the mixed-genre TikTok dance dataset and the single-genre AIST++ dataset, and observe improvements of about 2% in motion diversity metrics over the current baselines on the two datasets. We also conducted a user study to evaluate the visual quality of our synthesized dances. We noted that, on average, the samples generated by our method were rated about 9% higher on a five-point Likert scale than those of the best available current baseline in terms of motion quality and diversity.
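The two-level pipeline described above can be sketched in a few lines of NumPy. This is only an illustrative shape-level mock-up under assumed dimensions: the linear maps stand in for the paper's sequence-to-sequence beat-pose synthesizer and adversarial generator, and all names, feature sizes, and the fixed beat spacing are assumptions, not the authors' implementation.

```python
# Sketch of beat disentanglement: a coarse model synthesizes poses only at
# beat frames; a finer model fills in every frame, conditioned on the music
# features and the nearest preceding beat pose. Purely illustrative.
import numpy as np

rng = np.random.default_rng(0)

T, D_MUSIC, D_POSE = 120, 35, 69    # frames, music-feature dim, pose dim (assumed)
beat_idx = np.arange(0, T, 15)      # beat frames, here evenly spaced (assumed)

# Per-frame rhythm/pitch/melody features extracted from the music (mocked).
music = rng.standard_normal((T, D_MUSIC))

# Coarse "beat" level: map music features at the beat frames to beat poses.
W_beat = rng.standard_normal((D_MUSIC, D_POSE)) * 0.1
beat_poses = music[beat_idx] @ W_beat              # (num_beats, D_POSE)

# Finer "repletion" level: every frame is conditioned on its own music
# features plus the pose of the nearest preceding beat.
prev_beat = np.searchsorted(beat_idx, np.arange(T), side="right") - 1
cond = np.concatenate([music, beat_poses[prev_beat]], axis=1)
W_fine = rng.standard_normal((D_MUSIC + D_POSE, D_POSE)) * 0.1
dance = cond @ W_fine                              # (T, D_POSE) full sequence
```

In the paper both levels are learned networks trained adversarially; here the point is only the data flow: beat poses are produced first from beat-frame music features, then broadcast to condition the synthesis of all remaining frames.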
