Automated Translation of Human Postures from Kinect Data to Labanotation

2017 
We present a non-intrusive automated system that translates human postures into Labanotation, a graphical notation for human postures and movements. The system uses a Kinect sensor to capture human postures, identifies the positions and formations of the four major limbs (the two hands and two legs), converts them into the vocabulary of Labanotation, and finally translates them into a parseable LabanXML representation. Limb formations are classified from the Kinect skeleton stream using multi-class support vector machines, and the result is encoded as XML according to the Labanotation specification. A data set of postures was created and annotated to train the classifier and to evaluate its performance; the system achieves 80% to 90% accuracy across the four limbs. The system can serve as an effective front end for posture-analysis applications in areas such as dance and sports, where predefined postures form the basis for analysis and interpretation. The parseability of XML makes the output easy to integrate in a platform-independent manner.
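The core classification step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature layout (per-limb joint offsets) and the direction labels are hypothetical stand-ins for whatever features and Labanotation vocabulary the authors actually used, and the data here is synthetic.

```python
# Hypothetical sketch: classifying one limb's formation with a multi-class
# SVM (scikit-learn), as the paper does per limb from Kinect skeleton data.
# Features and class labels below are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Assume each posture yields a per-limb feature vector, e.g. the 3D
# offsets of three joints relative to the limb's root joint.
N_SAMPLES, N_FEATURES = 300, 9          # 3 joints x (x, y, z)
CLASSES = ["forward", "side", "up"]     # illustrative direction labels

X = rng.normal(size=(N_SAMPLES, N_FEATURES))
y = rng.integers(len(CLASSES), size=N_SAMPLES)
X += y[:, None] * 2.0   # shift each class so the synthetic data is separable

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# SVC handles the multi-class case via a one-vs-one scheme by default.
clf = SVC(kernel="rbf", C=1.0)
clf.fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
print(f"held-out accuracy: {acc:.2f}")
```

In a full pipeline, one such classifier would be trained per limb, and each predicted label would then be mapped to the corresponding Labanotation symbol and serialized into LabanXML.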