Adaptable Multimodal Interaction Framework for Robot-Assisted Cognitive Training

2018 
The size of the population with cognitive impairment is increasing worldwide, and socially assistive robotics offers a solution to the growing demand for professional carers. Adaptation to users generates more natural, human-like behavior that may be crucial for wider robot acceptance. The focus of this work is on robot-assisted cognitive training of patients who suffer from mild cognitive impairment (MCI) or Alzheimer's disease. We propose a framework that adjusts the level of robot assistance and the way robot actions are executed according to the user's input. The actions can be performed using any of the following modalities: speech, gesture, and display, or a combination of them. The choice of modalities depends on the availability of the required resources. The memory state of the user was modeled as a Hidden Markov Model and used to determine the level of robot assistance. A pilot user study was performed to evaluate the effects of the proposed framework on the quality of interaction with the robot.
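
The abstract does not detail the model, but the core mechanism it names (a Hidden Markov Model over the user's memory state that drives the assistance level) can be illustrated with a minimal sketch. The states, observations, probabilities, and assistance levels below are illustrative assumptions, not the authors' parameters; only the general idea of HMM forward filtering feeding a level-selection rule is taken from the abstract.

    import numpy as np

    # Hypothetical sketch (not the paper's implementation): a two-state HMM over the
    # user's memory of a training step, updated from observed responses, with the
    # resulting belief mapped to a discrete robot assistance level.

    STATES = ["remembers", "forgot"]           # assumed hidden memory states
    OBS = ["correct", "incorrect", "timeout"]  # assumed observable user inputs

    # Assumed transition matrix P(state_t | state_{t-1})
    A = np.array([[0.85, 0.15],
                  [0.30, 0.70]])

    # Assumed emission matrix P(observation | state)
    B = np.array([[0.80, 0.15, 0.05],   # remembers
                  [0.20, 0.50, 0.30]])  # forgot

    def update_belief(belief, obs_idx):
        """One step of HMM forward filtering: predict, then correct with the observation."""
        predicted = belief @ A
        corrected = predicted * B[:, obs_idx]
        return corrected / corrected.sum()

    def assistance_level(belief):
        """Map the estimated probability that the user forgot to an assistance level."""
        p_forgot = belief[1]
        if p_forgot < 0.3:
            return "verbal hint"
        if p_forgot < 0.7:
            return "verbal hint + gesture cue"
        return "full demonstration on the display"

    # Example: start from a uniform prior and process three user responses.
    belief = np.array([0.5, 0.5])
    for obs in ["incorrect", "incorrect", "timeout"]:
        belief = update_belief(belief, OBS.index(obs))
        print(obs, "->", belief.round(2), "->", assistance_level(belief))

In this sketch the same belief update could feed the modality-selection step described in the abstract, with the chosen assistance level rendered through speech, gesture, or the display depending on which resources are currently available.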