Leveraging behavioral models of sounding objects for gesture-controlled sound design

2011 
Sound designers and Foley artists have long struggled to create expressive soundscapes with standard editing software, devoting much time to calibrating multiple sound samples and adjusting parameters. We present an intuitive approach that exploits off-the-shelf motion-sensing input devices to trigger and modulate digital sound generators based on adaptable behavioral models of familiar physical sounding objects, enabling quick and fluid interaction with sound. Rather than requiring profound technical knowledge of sound design, the system leverages the user's motor memory and motion skills to mimic generic, familiar interactions with everyday sounding objects. This lets the user focus fully on the expressive act of sound creation while enjoying a fluent workflow and a satisfying user experience.
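The abstract does not specify the behavioral models used, but the general idea of driving a physically informed sound model from a gesture parameter can be sketched minimally. The example below is a hypothetical illustration, not the paper's implementation: a modal impact model (a sum of decaying sinusoids) whose excitation amplitude scales with a normalized gesture velocity, so that a faster strike yields a louder sound. The mode table for a "wooden bar" is invented for illustration.

```python
import math

def modal_impact(velocity, modes, sr=44100, dur=0.5):
    """Render an impact sound from a simple modal model.

    Each mode is (freq_hz, decay_per_s, gain). The excitation amplitude
    scales with the normalized gesture velocity (clamped to [0, 1]),
    mimicking how a faster strike on a real object sounds louder.
    """
    n = int(sr * dur)
    out = [0.0] * n
    amp = max(0.0, min(velocity, 1.0))  # clamp normalized gesture speed
    for freq, decay, gain in modes:
        w = 2.0 * math.pi * freq / sr
        for i in range(n):
            out[i] += amp * gain * math.exp(-decay * i / sr) * math.sin(w * i)
    return out

# Hypothetical "wooden bar" modes: frequency (Hz), decay rate (1/s), gain
WOOD_MODES = [(440.0, 8.0, 1.0), (1230.0, 12.0, 0.5), (2620.0, 20.0, 0.25)]

# A fast gesture (velocity 0.8) triggers a short, bright impact
samples = modal_impact(velocity=0.8, modes=WOOD_MODES, dur=0.1)
```

In a full system of this kind, the velocity would come from a motion-sensing controller, and other gesture features (direction, pressure, repetition rate) could be mapped to other model parameters such as decay or mode tuning.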