Believable automatically synthesized motion by knowledge-enhanced motion transformation

2000 
Automatic synthesis of character animation promises to put the power of professional animators in the hands of everyday would-be filmmakers, and to enable rich behaviors and personalities in 3D virtual worlds. This goal has many difficult sub-problems, including how to generate primitive motion for any class of action (run, jump, sigh, etc.) satisfying arbitrary goals and in any style (e.g. sad, hurried, the way George walks), how to parameterize these actions at a high level while still allowing detailed modifications to the result, and how to combine the primitive motions into coherent sequences and combinations of actions. Previous approaches to automatic motion synthesis generally appeal to some combination of physics simulation and robotic control to generate motion from a high-level description. In addition to being difficult to implement, these algorithms are limited to producing styles that can be expressed numerically in terms of physical quantities.

In this thesis we develop a new automatic synthesis algorithm based on motion transformation, which produces new motions by combining and/or deforming existing motions. Current motion transformation techniques are low- to mid-level tools, limited in the range and/or precision of the deformations they can apply to a motion or group of motions. We believe these limitations follow from treating motions as largely unstructured collections of signals. Consequently, the first contribution of our work is a powerful, general motion transformation algorithm that combines the strengths of previous techniques by structuring input motions in a way that lets us combine the effects of several transformation techniques.

To use this algorithm in an automatic setting, we must be able to encapsulate control rules in primitive motion generators. We accomplish this by developing the “motion model,” which contains rules for transforming sets of example motions for a specific class of action. We show that because the example motions already contain detailed information about the action, the rules can be formulated in terms of general properties of the action, such as targets and goals, rather than low-level properties such as muscle activations. This not only makes the rules relatively easy to devise, but also allows a single motion model to generate motion in any style for which we can provide a few example motions. In the course of our experimentation we developed fifteen motion models for humanoid character animation, several of which possess multiple styles (mainly derived from motion-captured data).

After developing motion models, we continue to use knowledge encapsulation to address the problems of combining the output of motion models sequentially (segueing) and simultaneously (layering), collectively known as “transitioning.” Because of the action-specific knowledge stored in motion models, we are able to create much richer and higher-quality transitions than past approaches. Our current results enable us to animate coherent stories, iteratively refining an initial director-like specification, in near real time. Some of our results can be viewed at http://www.cs.cmu.edu/~spiff/thesis/animations.htm#chapter8.
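As a rough illustration only, the following Python sketch shows one plausible shape for the “motion model” abstraction described above: example motions grouped by style, plus an action-specific transformation rule phrased in terms of high-level goals rather than muscle activations. Every name here (Motion, MotionModel, reach_rule) is hypothetical and does not reflect the thesis's actual interface.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Motion:
    """One clip: a list of frames, each mapping joint name -> value."""
    frames: List[dict]
    style: str  # e.g. "sad", "hurried"

@dataclass
class MotionModel:
    """Generator for one class of action ("reach", "walk", ...).

    Holds example motions keyed by style, plus an action-specific
    rule that deforms an example to satisfy a high-level goal.
    """
    action: str
    rule: Callable[[Motion, object], Motion]
    examples: Dict[str, List[Motion]] = field(default_factory=dict)

    def add_example(self, m: Motion) -> None:
        self.examples.setdefault(m.style, []).append(m)

    def generate(self, style: str, goal) -> Motion:
        # A real model would blend/warp several examples; picking the
        # first example of the requested style keeps the sketch short.
        base = self.examples[style][0]
        return self.rule(base, goal)

# A toy "reach" rule: pull the wrist channel halfway toward the target
# in every frame. Real rules encode much richer action knowledge.
def reach_rule(base: Motion, target: float) -> Motion:
    frames = [dict(f, wrist=f["wrist"] + 0.5 * (target - f["wrist"]))
              for f in base.frames]
    return Motion(frames, base.style)

reach = MotionModel("reach", reach_rule)
reach.add_example(Motion([{"wrist": 0.0}, {"wrist": 0.3}], style="hurried"))
hurried_reach = reach.generate("hurried", goal=1.0)
```

The point of the design is the one the abstract makes: because the examples already carry the detailed motion, the rule only needs to speak about goals, and a new style is added by supplying new examples, not new rules.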
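Likewise, a minimal sketch of a segue, reusing the hypothetical Motion class above: a plain crossfade over an overlap window. The thesis's transitions draw on the action-specific knowledge stored in motion models and are considerably richer; this only fixes the basic idea of blending the end of one generated motion into the start of the next.

```python
def segue(a: Motion, b: Motion, overlap: int) -> Motion:
    """Crossfade the last `overlap` frames of `a` into the first
    `overlap` frames of `b` (assumes both share the same joint set)."""
    blended = []
    for i in range(overlap):
        w = (i + 1) / (overlap + 1)  # blend weight ramps toward b
        fa = a.frames[len(a.frames) - overlap + i]
        fb = b.frames[i]
        blended.append({j: (1 - w) * fa[j] + w * fb[j] for j in fa})
    frames = a.frames[:len(a.frames) - overlap] + blended + b.frames[overlap:]
    return Motion(frames, a.style)
```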