Implementation of emotional facial actions for human-like agents in a multi-tasking environment

2009 
Automatic synthesis of combined facial actions, such as facial expressions and movements of the lips, eyes, and eyelids, for human-like agents remains quite difficult. The approach presented here calculates combined facial actions for human-like agents using a uniform model based on the anatomical structure of the human face. An algorithm for multi-tasking environments is presented that combines Sloman's CogAff architecture with Damasio's somatic marker mechanism. Priorities are then defined for the various facial actions, together with a method for resolving conflicts between them. Finally, an emotion model for human-like agents is given that automatically generates combined facial actions and resolves the conflicts among the various facial actions, yielding personalized emotion-based expressions.
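The abstract does not spell out the priority scheme or the conflict-resolution rule, so the following is only a minimal sketch of one plausible reading: facial actions drive shared muscle-like channels, each action carries a priority, and on a conflicting channel the higher-priority action wins. All names, priority values, and channels here are hypothetical illustrations, not the paper's actual model.

```python
from dataclasses import dataclass, field

# Hypothetical priority levels; the paper defines priorities for facial
# actions but does not give concrete values in the abstract.
PRIORITY = {
    "blink": 1,        # reflexive, lowest-level action
    "lip_sync": 2,     # speech-driven lip movement
    "gaze_shift": 2,
    "expression": 3,   # emotion-driven expression (e.g. smile, frown)
}

@dataclass
class FacialAction:
    name: str
    # Muscle-like channels this action drives, with target weights in [0, 1].
    channels: dict = field(default_factory=dict)

def resolve(actions: list[FacialAction]) -> dict:
    """Combine facial actions; on a conflicting channel the higher-priority
    action wins outright (one simple conflict-resolution policy)."""
    out: dict[str, tuple[int, float]] = {}  # channel -> (priority, value)
    for act in actions:
        p = PRIORITY.get(act.name, 0)
        for ch, v in act.channels.items():
            if ch not in out or p > out[ch][0]:
                out[ch] = (p, v)
    return {ch: v for ch, (_, v) in out.items()}

# Example: lip-sync and a smile both drive the lip corners; the
# higher-priority expression takes the conflicting channel, while
# non-conflicting channels pass through unchanged.
pose = resolve([
    FacialAction("lip_sync", {"jaw_open": 0.6, "lip_corner": 0.1}),
    FacialAction("expression", {"lip_corner": 0.8, "cheek_raise": 0.5}),
    FacialAction("blink", {"eyelid_close": 1.0}),
])
print(pose)
# {'jaw_open': 0.6, 'lip_corner': 0.8, 'cheek_raise': 0.5, 'eyelid_close': 1.0}
```

A winner-takes-channel policy is only one option; a weighted blend keyed to the same priorities would also fit the abstract's description and would produce smoother transitions between competing actions.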