Learning a Continuous Control of Motion Style from Natural Examples
2019
The simulation of humanoid avatars is relevant for a multitude of applications, such as movies, games, simulations for autonomous vehicles, and virtual avatars. To achieve realistic and believable characters, it is important to simulate motion in a natural style that matches the character's characteristics. A female avatar, for example, should move in a female style, and different characters should vary in how strongly they express this style. However, both the manual definition and the acting of a natural female or male style are non-trivial. Previous work on style transfer is insufficient, as its style examples are not necessarily natural depictions of female or male locomotion. We propose a novel data-driven method to infer style information from individual samples of male and female motion capture data. For this purpose, data from 12 female and 12 male participants was captured in an experimental setting. A neural-network-based motion model is trained for each participant, and the style dimension is learned in the latent representation of these models. A linear style model is thus inferred on top of the motion models. It can be used to synthesize network models of different style expressiveness on a continuous scale while retaining the performance and content of the original network model. A user study supports the validity of our approach while highlighting issues with simpler approaches to inferring the style.
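The linear style model described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes each participant's trained motion model can be summarized by a latent vector, and that a single linear direction in that latent space separates the female and male examples. All names, dimensions, and the placeholder latent vectors are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
latent_dim = 8  # hypothetical size of the latent representation

# Placeholder latent vectors for the 12 female and 12 male participant
# models; in the paper these would come from the trained motion networks.
female_latents = rng.normal(loc=+1.0, scale=0.5, size=(12, latent_dim))
male_latents = rng.normal(loc=-1.0, scale=0.5, size=(12, latent_dim))

# Linear style model: the direction from the male mean to the female mean,
# with the neutral point halfway between the two group means.
style_direction = female_latents.mean(axis=0) - male_latents.mean(axis=0)
neutral = 0.5 * (female_latents.mean(axis=0) + male_latents.mean(axis=0))

def stylize(alpha):
    """Latent code at style expressiveness alpha on a continuous scale:
    alpha = -1 recovers the male mean, 0 is neutral, +1 the female mean;
    values beyond +/-1 exaggerate the respective style."""
    return neutral + 0.5 * alpha * style_direction

# Sweeping alpha interpolates smoothly between the two styles.
codes = [stylize(a) for a in (-1.0, -0.5, 0.0, 0.5, 1.0)]
```

A decoder network conditioned on such a latent code could then synthesize motion at any point along the continuous style scale, which is the role the inferred style model plays on top of the per-participant motion models.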