Multimodal smart interactive presentation system
2013
We propose a system that lets presenters control presentations naturally through body gestures and voice commands. A presentation therefore no longer follows a rigid sequential structure but can be delivered in flexible, content-adapted scenarios. The system fuses three interaction modules: full-body gesture recognition from Kinect 3D skeletal data, key-concept detection through context analysis of natural speech, and small-scale hand-gesture recognition from smartphone haptic sensor data. Each module runs in real time, with accuracies of 95.0%, 91.2%, and 90.1%, respectively. Events generated by the three modules trigger pre-defined scenarios in a presentation, creating a more engaging experience for the audience.
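The abstract's event-driven design, in which each recognition module emits events that are mapped to pre-defined presentation scenarios, can be sketched as a small dispatcher. This is a minimal illustration only; the module names, event names, and scenario actions below are hypothetical assumptions, not the authors' actual implementation.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Tuple

@dataclass(frozen=True)
class Event:
    source: str  # hypothetical module id: "kinect", "speech", or "phone"
    name: str    # hypothetical event name, e.g. "swipe_left"

class ScenarioDispatcher:
    """Maps (module, event) pairs to pre-defined scenario actions."""

    def __init__(self) -> None:
        self._handlers: Dict[Tuple[str, str], Callable[[], str]] = {}

    def register(self, source: str, name: str, action: Callable[[], str]) -> None:
        self._handlers[(source, name)] = action

    def dispatch(self, event: Event) -> str:
        # Unrecognized events are ignored rather than raising,
        # so a live presentation is never interrupted.
        action = self._handlers.get((event.source, event.name))
        return action() if action else "ignored"

dispatcher = ScenarioDispatcher()
dispatcher.register("kinect", "swipe_left", lambda: "next_slide")
dispatcher.register("speech", "keyword:outline", lambda: "jump_to_outline")
dispatcher.register("phone", "tilt_right", lambda: "laser_pointer_on")

print(dispatcher.dispatch(Event("kinect", "swipe_left")))
print(dispatcher.dispatch(Event("speech", "keyword:outline")))
```

A registry keyed on (module, event) keeps the three recognizers decoupled from the presentation logic, so new gestures or vocal commands can be bound to scenarios without touching the recognition code.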