I see it in your eyes: Training the shallowest-possible CNN to recognise emotions and pain from muted web-assisted in-the-wild video-chats in real-time

2020 
Abstract: Robust, value- and time-continuous emotion recognition has enormous potential benefits within healthcare. For example, within mental health, a real-time patient-monitoring system capable of accurately inferring a patient's emotional state could help doctors make an appropriate diagnosis and treatment plan. Such interventions could be vital in ensuring a higher quality of life for the patient involved. To make such tools a reality, the associated machine learning systems need to be fast, robust and generalisable. In this regard, we present herein a novel emotion recognition system consisting of the shallowest realisable Convolutional Neural Network (CNN) architecture. We draw insights from visualisations of the trained filter weights and of the facial action unit (FAU) activations, i.e. the inputs to the model, of the participants featured in the in-the-wild, spontaneous video-chat sessions of the SEWA corpus. Further, we demonstrate the generalisability of this approach on the German, Hungarian and Chinese cultures available in this corpus. The obtained cross-cultural performance is a testimony to the universality of FAUs in the expression and understanding of human affective behaviours. These learnings were moderately consistent with human perception of emotional expression. The practicality of the proposed approach is also demonstrated in another key healthcare application: pain intensity prediction. Key results from these experiments highlight the transparency of the shallow CNN structure. As FAUs can be extracted in near real-time, and because the models we developed are exceptionally shallow, this study paves the way for robust, cross-cultural, end-to-end, in-the-wild, explainable, real-time affect and pain prediction that is value- and time-continuous.
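To make the described setup concrete, the sketch below shows one plausible reading of "the shallowest realisable CNN" operating on FAU activations: a single temporal convolution over a sequence of AU intensities followed by a point-wise head that emits frame-level valence and arousal. This is an illustrative assumption, not the authors' exact model; the input dimension (17 AUs), filter count and kernel size are all hypothetical placeholders.

```python
# Minimal sketch (assumed architecture, not the paper's exact model): a very shallow
# 1-D CNN mapping a time series of facial action unit (FAU) activations to
# value- and time-continuous valence/arousal predictions.
import torch
import torch.nn as nn


class ShallowFAUCNN(nn.Module):
    def __init__(self, n_aus: int = 17, n_filters: int = 32, kernel_size: int = 9):
        super().__init__()
        # Single temporal convolution over the FAU channels: the "shallowest" variant.
        self.conv = nn.Conv1d(n_aus, n_filters, kernel_size, padding=kernel_size // 2)
        self.act = nn.ReLU()
        # Point-wise head producing two continuous outputs (valence, arousal) per frame.
        self.head = nn.Conv1d(n_filters, 2, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_aus, time) -> (batch, 2, time); predictions stay time-continuous.
        return self.head(self.act(self.conv(x)))


if __name__ == "__main__":
    model = ShallowFAUCNN()
    fau_sequence = torch.randn(1, 17, 250)  # e.g. 250 frames of 17 AU intensities
    preds = model(fau_sequence)
    print(preds.shape)  # torch.Size([1, 2, 250])
```

Because the model has only one learned temporal filter bank, its weights can be visualised directly against the AU inputs, which is consistent with the transparency claims made in the abstract.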