Multimodal Differentiation of Obstacles in Repeated Adaptive Human-Computer Interactions

2021 
Human-Computer Interaction can be impeded by various interaction obstacles that impact a user’s perception or cognition. In this work, we detect and discriminate such interaction obstacles from different data modalities in order to compensate for them through User Interface (UI) adaptation. For example, we detect memory-based obstacles from brain activity and compensate through repetition of information in the UI; we detect visual obstacles from user behavior and compensate by complementing visual with auditory information in the UI. Online cognitive adaptive systems should be able to decide on the most suitable UI adaptation given inputs from several obstacle detectors. In this paper, we employ a Bayesian fusion approach over different underlying obstacle detectors across multiple consecutive interaction sessions. Experimental results show that the model outperforms the baseline in the first interaction with an average accuracy of 72.5%, and improves markedly in subsequent interactions as additional information accumulates, reaching an average accuracy of 98%.
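The fusion scheme described above can be illustrated with a minimal Bayesian-update sketch: a posterior over obstacle hypotheses is multiplied by each detector's likelihood and renormalized, session after session, so evidence accumulates across repeated interactions. The obstacle classes and likelihood values below are illustrative assumptions, not the detectors or numbers from the paper.

```python
import numpy as np

# Hypothetical obstacle hypotheses (assumed for illustration).
obstacles = ["visual", "memory"]

def fuse_session(posterior, detector_likelihoods):
    """One interaction session: multiply the current posterior by each
    detector's likelihood P(observation | obstacle), then renormalize."""
    for lik in detector_likelihoods:
        posterior = posterior * np.asarray(lik, dtype=float)
        posterior = posterior / posterior.sum()
    return posterior

# Uniform prior over obstacle types before the first interaction.
posterior = np.array([0.5, 0.5])

# Session 1: the behavior-based detector weakly favors a visual obstacle,
# while the brain-activity detector is nearly uninformative.
posterior = fuse_session(posterior, [[0.7, 0.3], [0.55, 0.45]])

# Session 2: both detectors now favor the visual obstacle, so the
# posterior sharpens with the additional information.
posterior = fuse_session(posterior, [[0.8, 0.2], [0.7, 0.3]])

print(obstacles[int(np.argmax(posterior))])  # most probable obstacle
```

This mirrors the paper's qualitative finding: a single session gives only a moderately confident decision, while consecutive sessions drive the posterior toward near-certainty.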