Population-specific Detection of Couples' Interpersonal Conflict using Multi-task Learning

2018 
The inherent diversity of human behavior limits the capabilities of general large-scale machine learning systems, which usually require ample amounts of data to provide robust descriptors of the outcomes of interest. Motivated by this challenge, personalized and population-specific models comprise a promising line of work for representing human behavior, since they can make decisions for clusters of people with common characteristics, reducing the amount of data needed for training. We propose a multi-task learning (MTL) framework for developing population-specific models of interpersonal conflict between couples using ambulatory sensor and mobile data from real-life interactions. The criteria for population clustering include global indices related to couples' relationship quality and attachment style, person-specific factors of partners' positivity, negativity, and stress levels, as well as fluctuating factors of daily emotional arousal obtained from acoustic and physiological indices. Population-specific information is incorporated through a MTL feed-forward neural network (FF-NN), whose first layers capture the common information across all data samples, while its last layers are specific to the unique characteristics of each population. Our results indicate that the proposed MTL FF-NN trained solely on the sensor-based acoustic, linguistic, and physiological modalities provides unweighted and weighted F1-scores of 0.51 and 0.75, respectively, outperforming the corresponding baselines of a single general FF-NN trained on the entire dataset and separate FF-NNs trained on each population cluster individually. These results demonstrate the feasibility of such ambulatory systems for detecting real-life behaviors and possibly intervening upon them, and highlight the importance of taking into account the inherent diversity of different populations within the general pool of data.
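The abstract describes the MTL architecture only at a high level: shared first layers learned from all samples, followed by last layers specific to each population cluster. The sketch below illustrates that layout in PyTorch; the layer sizes, feature dimension, cluster identifiers, and class names are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class MTLConflictDetector(nn.Module):
    """Multi-task feed-forward network: shared bottom layers learn
    representations common to all couples, while one population-specific
    head per cluster captures cluster-specific patterns.
    (Hypothetical dimensions; the paper does not specify layer sizes.)"""

    def __init__(self, input_dim, cluster_ids, shared_dim=64, head_dim=32, n_classes=2):
        super().__init__()
        # Shared layers: trained on samples from every population cluster.
        self.shared = nn.Sequential(
            nn.Linear(input_dim, shared_dim),
            nn.ReLU(),
            nn.Linear(shared_dim, shared_dim),
            nn.ReLU(),
        )
        # Population-specific heads: one small sub-network per cluster.
        self.heads = nn.ModuleDict({
            str(c): nn.Sequential(
                nn.Linear(shared_dim, head_dim),
                nn.ReLU(),
                nn.Linear(head_dim, n_classes),
            )
            for c in cluster_ids
        })

    def forward(self, x, cluster_id):
        # Route each sample through the shared trunk, then through the
        # head matching its population cluster.
        return self.heads[str(cluster_id)](self.shared(x))


# Usage sketch: acoustic, linguistic, and physiological features concatenated
# into a single vector per sample (120 dimensions is an assumed value).
model = MTLConflictDetector(input_dim=120, cluster_ids=[0, 1, 2])
features = torch.randn(8, 120)            # batch of 8 feature vectors
logits = model(features, cluster_id=1)    # conflict/no-conflict logits for cluster 1
```

In this setup the shared trunk is updated by gradients from every cluster's samples, while each head only sees its own cluster, which is one common way to realize the "shared first layers, population-specific last layers" design the abstract describes.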