Physics-based Concatenative Sound Synthesis of Photogrammetric Models for Aural and Haptic Feedback in Virtual Environments
2020
We present a novel physics-based concatenative sound synthesis (CSS) methodology for congruent interactions across physical, graphical, aural and haptic modalities in Virtual Environments. Navigation through aural and haptic corpora of annotated audio units is driven by user interactions with highly realistic photogrammetry-based models in a game engine, where automated and interactive positional, physics and graphics data are supported. From a technical perspective, this contribution extends existing CSS frameworks by avoiding the mapping or mining of annotation data to real-time performance attributes, while guaranteeing a degree of novelty and variation for the same gesture.
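To make the idea of corpus navigation driven by engine-side physics data more concrete, the following is a minimal Python sketch, not the paper's actual implementation: it assumes a corpus of annotated audio units with simple normalised descriptors, takes hypothetical collision features (`impact_velocity`, `contact_area`) as they might arrive from a game-engine physics callback, and samples from the k nearest units so that repeating the same gesture yields slight variation. All names and descriptors here are illustrative assumptions.

```python
import random
from dataclasses import dataclass

@dataclass
class AudioUnit:
    """One annotated segment (unit) in the aural/haptic corpus."""
    path: str          # audio file containing the unit
    onset: float       # start time within the file (s)
    duration: float    # unit length (s)
    loudness: float    # annotated descriptor, normalised to [0, 1]
    brightness: float  # annotated descriptor, normalised to [0, 1]

def select_unit(corpus, impact_velocity, contact_area, k=5):
    """Pick a unit whose annotations lie close to the interaction features.

    impact_velocity and contact_area are assumed to come from the engine's
    collision callback, already normalised to [0, 1]. Units are ranked by
    distance to this descriptor target and one of the k nearest is sampled
    at random, so identical gestures can still produce varied output.
    """
    target = (impact_velocity, contact_area)
    ranked = sorted(
        corpus,
        key=lambda u: (u.loudness - target[0]) ** 2
                      + (u.brightness - target[1]) ** 2,
    )
    return random.choice(ranked[:k])

if __name__ == "__main__":
    # Hypothetical corpus of 200 annotated impact units.
    corpus = [
        AudioUnit("impacts_wood.wav", onset=i * 0.25, duration=0.25,
                  loudness=random.random(), brightness=random.random())
        for i in range(200)
    ]
    # Example collision reported by the engine: a fast, small-area tap.
    unit = select_unit(corpus, impact_velocity=0.8, contact_area=0.2)
    print(f"play {unit.path} @ {unit.onset:.2f}s for {unit.duration:.2f}s")
```

A real system would replace the two-dimensional descriptor target with the richer positional, physics and graphics data the paper describes, and would stream the selected units to both the audio and haptic renderers.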
Keywords: