Michiel van de Panne | [SIGGRAPH 2020] Character Controllers using Motion VAEs | Uploaded June 2020 | Updated October 2024.
Video accompanying the SIGGRAPH 2020 paper of the same title.

A fundamental problem in computer animation is that of realizing purposeful and realistic human movement given a sufficiently-rich set of motion capture clips. We learn data-driven generative models of human movement using autoregressive conditional variational autoencoders, or Motion VAEs. The latent variables of the learned autoencoder define the action space for the movement and thereby govern its evolution over time. Planning or control algorithms can then use this action space to generate desired motions. In particular, we use deep reinforcement learning to learn controllers that achieve goal-directed movements. We demonstrate the effectiveness of the approach on multiple tasks. We further evaluate system-design choices and describe the current limitations of Motion VAEs.
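The sketch below is not the authors' code; it is a minimal illustration of the idea described in the abstract: an autoregressive conditional VAE whose decoder predicts the next pose from the previous pose and a latent code, so that at control time the latent acts as the action a reinforcement-learning policy selects. All dimensions, layer sizes, and names are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

POSE_DIM = 60     # assumed size of a single-frame pose feature vector
LATENT_DIM = 32   # assumed latent (action) dimensionality
HIDDEN = 256      # assumed hidden-layer width

class MotionVAE(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: (previous pose, current pose) -> parameters of the latent distribution.
        self.encoder = nn.Sequential(
            nn.Linear(2 * POSE_DIM, HIDDEN), nn.ELU(),
            nn.Linear(HIDDEN, HIDDEN), nn.ELU(),
        )
        self.to_mu = nn.Linear(HIDDEN, LATENT_DIM)
        self.to_logvar = nn.Linear(HIDDEN, LATENT_DIM)
        # Decoder: (previous pose, latent) -> next pose, i.e. one autoregressive step.
        self.decoder = nn.Sequential(
            nn.Linear(POSE_DIM + LATENT_DIM, HIDDEN), nn.ELU(),
            nn.Linear(HIDDEN, HIDDEN), nn.ELU(),
            nn.Linear(HIDDEN, POSE_DIM),
        )

    def forward(self, prev_pose, cur_pose):
        # Training pass: encode the transition, sample z, reconstruct the next pose.
        h = self.encoder(torch.cat([prev_pose, cur_pose], dim=-1))
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        recon = self.decoder(torch.cat([prev_pose, z], dim=-1))
        return recon, mu, logvar

    def step(self, prev_pose, z):
        # Rollout step used by a controller: the latent z plays the role of the action.
        return self.decoder(torch.cat([prev_pose, z], dim=-1))

def vae_loss(recon, target, mu, logvar, beta=0.2):
    # Standard VAE objective: reconstruction error plus a weighted KL term.
    rec = ((recon - target) ** 2).mean()
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).mean()
    return rec + beta * kl
```

In this reading, a goal-directed controller would be trained with deep RL on top of the frozen decoder: the policy observes the current pose and task goal, outputs z, and the decoder's `step` produces the next pose of the motion.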

