Modeling Multimodal-Multiuser Interactions in Declarative Multimedia Languages

2019 
Recent advances in hardware and software technologies have given rise to a new class of human-computer interfaces that both exploit multiple modalities and allow for multiple collaborating users. Compared to the development of traditional single-user WIMP (windows, icons, menus, pointer) applications, however, applications that seamlessly integrate multimodal-multiuser interactions bring new specification and runtime requirements. With the aim of assisting the specification of multimedia applications that integrate multimodal-multiuser interactions, this paper: (1) proposes the MMAM (Multimodal-Multiuser Authoring Model); (2) presents three different instantiations of it (in NCL, HTML, and a block-based syntax); and (3) evaluates the proposed model through a task-based user study. MMAM enables programmers to design and compare different solutions for applications with multimodal-multiuser requirements. The proposed instantiations served as proofs of concept for the feasibility of implementing the model and provided the basis for practical experimentation, while the user study focused on capturing evidence of both user understanding and user acceptance of the proposed model. We asked developers to perform tasks using MMAM and then answer a TAM (Technology Acceptance Model)-based questionnaire covering both the model and its instantiations. The study indicates that participants easily understood the model (most performed the required tasks with minor or no errors) and found it both useful and easy to use: 94.47% of the participants gave positive answers to the TAM questions about the block-based representation, whereas 75.17% gave positive answers to the instantiation-related questions.