Affective social interaction with CuDDler robot

2013 
This paper introduces an implemented affective social robot, called CuDDler. The goal of this research is to explore and demonstrate the utility of a robot that is capable of recognising and responding to a user's emotional acts (i.e., affective stimuli), thereby improving social interactions. CuDDler uses two main input modalities to recognise the user's emotional acts: (a) audio (linguistic and non-linguistic sounds) and (b) visual (facial expressions). Likewise, CuDDler expresses its emotional responses through two output modalities: (a) gesture and (b) sound. During the TechFest 2012 event, CuDDler successfully demonstrated its capability of recognising the user's emotional acts and responding with appropriate expressions. Although CuDDler is still in an early prototyping stage, preliminary survey results indicate that it has the potential not only to aid human-robot interaction but also to contribute towards the long-term goal of multi-modal emotion recognition and socially interactive robots.
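The two-input, two-output architecture described in the abstract can be illustrated with a minimal sketch. All names, the fusion rule, and the response table below are hypothetical placeholders; the paper does not specify CuDDler's actual recogniser or actuator interfaces.

```python
# Hypothetical sketch of a stimulus-to-response mapping for an affective
# robot like CuDDler; not the authors' implementation.
from dataclasses import dataclass

@dataclass
class Stimulus:
    audio_emotion: str   # label from linguistic/non-linguistic sound analysis
    visual_emotion: str  # label from facial-expression analysis

def fuse(stimulus: Stimulus) -> str:
    """Assumed fusion rule: prefer the visual channel on disagreement."""
    if stimulus.audio_emotion == stimulus.visual_emotion:
        return stimulus.audio_emotion
    return stimulus.visual_emotion

# Assumed response table: fused emotion -> (gesture, sound) output pair.
RESPONSES = {
    "happy": ("wave_arms", "giggle"),
    "sad":   ("nuzzle", "soothing_coo"),
    "angry": ("back_away", "whimper"),
}

def respond(stimulus: Stimulus) -> tuple[str, str]:
    """Map a recognised emotional act to a gesture-and-sound response."""
    emotion = fuse(stimulus)
    return RESPONSES.get(emotion, ("idle", "silence"))
```

For example, `respond(Stimulus("happy", "happy"))` yields the `("wave_arms", "giggle")` pair, while conflicting channels fall back to the visual reading under the assumed rule.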