Learning as Performance: Autoencoding and Generating Dance Movements in Real Time

2018 
This paper describes the technology behind a performance in which human dancers interact with an “artificial” performer projected on a screen. The system learns movement patterns from the human dancers in real time. It can also generate novel movement sequences that go beyond what it has been taught, thereby serving as a source of inspiration for the human dancers, challenging their habits and normal boundaries and enabling a mutual exchange of movement ideas. It is central to the performance concept that the system’s learning process is perceptible to the audience. To this end, an autoencoder neural network is trained in real time with motion data captured live on stage. As training proceeds, a “pose map” emerges that the system explores in a kind of improvisational state. The paper shows how this method is applied in the performance, and shares observations and lessons learned in the process.
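The core idea, an autoencoder trained online on streamed pose data whose latent space becomes a “pose map” that can be sampled to generate novel movement, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the pose dimensionality, network size, simulated data, and training loop are all hypothetical assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each "pose" is a flat vector of joint coordinates
# (e.g. 15 joints x 3 coordinates = 45 values), as a motion-capture
# system might stream them frame by frame.
POSE_DIM, LATENT_DIM = 45, 2

# Tiny linear autoencoder: encoder W_e maps a pose into the 2-D "pose
# map"; decoder W_d maps a latent point back to a full pose.
W_e = rng.normal(0, 0.1, (POSE_DIM, LATENT_DIM))
W_d = rng.normal(0, 0.1, (LATENT_DIM, POSE_DIM))

def train_step(batch, lr=0.01):
    """One online training step on a batch of poses (one pose per row)."""
    global W_e, W_d
    z = batch @ W_e              # encode into the latent pose map
    recon = z @ W_d              # decode back to pose space
    err = recon - batch          # reconstruction error
    # Gradient-descent updates for the mean-squared reconstruction error.
    grad_d = z.T @ err / len(batch)
    grad_e = batch.T @ (err @ W_d.T) / len(batch)
    W_d -= lr * grad_d
    W_e -= lr * grad_e
    return float(np.mean(err ** 2))

# Simulate a live feed: noisy poses clustered around a few recurring
# "movement patterns" performed by the dancers.
centers = rng.normal(0, 1, (3, POSE_DIM))
losses = []
for step in range(500):
    batch = centers[rng.integers(0, 3, 16)] + rng.normal(0, 0.05, (16, POSE_DIM))
    losses.append(train_step(batch))

# "Improvise": sample a point in the learned latent space and decode it
# into a pose the system was never shown directly.
novel_pose = rng.normal(0, 1, LATENT_DIM) @ W_d
```

In the performance described by the paper this training happens live on stage, so the reconstruction loss falling (and the pose map taking shape) is itself part of what the audience perceives; here the falling `losses` list plays that role.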