Skeleton-Based Labanotation Generation Using Multi-model Aggregation

2020 
Labanotation is a well-known notation system for effective dance recording and archiving. Using computer technology to generate Labanotation automatically is a challenging but meaningful task, yet existing methods cannot fully exploit the spatial characteristics of human motion or distinguish subtle differences between similar movements. In this paper, we propose a multi-model aggregation method for Labanotation generation. First, two types of features are extracted, the joint feature and the Lie group feature, which reinforce the representation of human motion data. Second, a two-branch network architecture based on a Long Short-Term Memory (LSTM) network and LieNet is introduced to perform effective human movement recognition: LSTM is capable of modeling long-term dependencies in the temporal domain, while LieNet is a powerful network for spatial analysis based on the Lie group structure. Within this architecture, the joint feature and the Lie group feature are fed into the LSTM model and the LieNet model, respectively, for training. Furthermore, we apply score fusion to combine the output class scores of the two branches, which performs better than either single model owing to the complementarity between LSTM and LieNet. In addition, skip connections are applied in the LieNet structure, which simplifies training and improves convergence. Evaluations on a standard motion capture dataset demonstrate the effectiveness of the proposed method and its superiority over previous works.
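
To illustrate the two-branch, late score fusion idea described in the abstract, below is a minimal PyTorch sketch. It is hypothetical and not the authors' implementation: the LieNet branch is stood in for by a simple residual MLP over flattened Lie-group (rotation) features, and the fusion weight, feature dimensions, and layer sizes are illustrative assumptions.

```python
# Hypothetical sketch of the two-branch architecture with late score fusion.
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointLSTMBranch(nn.Module):
    """LSTM branch: models temporal dependencies of per-frame joint features."""
    def __init__(self, joint_dim, hidden_dim, num_classes):
        super().__init__()
        self.lstm = nn.LSTM(joint_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, num_classes)

    def forward(self, x):               # x: (batch, frames, joint_dim)
        _, (h_n, _) = self.lstm(x)      # last hidden state summarizes the sequence
        return self.fc(h_n[-1])         # class scores (logits)

class LieGroupBranch(nn.Module):
    """Stand-in for LieNet: a residual MLP over flattened rotation features.
    A real LieNet uses rotation-mapping, pooling, and log-map layers on SO(3) data."""
    def __init__(self, lie_dim, hidden_dim, num_classes):
        super().__init__()
        self.fc1 = nn.Linear(lie_dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, num_classes)

    def forward(self, x):               # x: (batch, lie_dim)
        h = F.relu(self.fc1(x))
        h = h + F.relu(self.fc2(h))     # skip connection, mirroring the modified LieNet
        return self.out(h)

def fuse_scores(logits_a, logits_b, w=0.5):
    """Late score fusion: weighted average of the two branches' class probabilities."""
    return w * F.softmax(logits_a, dim=1) + (1 - w) * F.softmax(logits_b, dim=1)

if __name__ == "__main__":
    batch, frames, joint_dim, lie_dim, classes = 4, 60, 75, 96, 20  # illustrative sizes
    lstm_branch = JointLSTMBranch(joint_dim, 128, classes)
    lie_branch = LieGroupBranch(lie_dim, 128, classes)
    joints = torch.randn(batch, frames, joint_dim)
    lie_feats = torch.randn(batch, lie_dim)
    probs = fuse_scores(lstm_branch(joints), lie_branch(lie_feats))
    print(probs.shape)                  # (4, 20) fused class probabilities
```

In this sketch each branch is trained on its own feature stream, and only the output class scores are combined, which is one common way to realize the complementarity between a temporal model and a spatial model.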