Mitigating the Bias of Heterogeneous Human Behavior in Affective Computing
2021
Affective computing is broadly applied in decision-making systems ranging from mental health assessment to employability evaluation. The heterogeneity of human behavioral data poses challenges for both model validity and fairness. Limited access to sensitive attributes (e.g., race, gender) in real-world settings makes it even more difficult to mitigate unfairness in model outcomes. In this work, we focus on the heterogeneity of human behavioral signals and analyze its impact on model fairness. We design a novel method, multi-layer factor analysis, to automatically identify heterogeneity patterns in high-dimensional behavioral data, and propose a framework that enhances the fairness of behavioral modeling without accessing sensitive attributes.
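The abstract's core idea, recovering latent heterogeneity patterns from behavioral features without sensitive labels, can be illustrated with a plain factor decomposition. The sketch below is a hypothetical, simplified stand-in (a single-layer SVD, not the paper's multi-layer factor analysis): it simulates two hidden behavioral subgroups and shows that the leading factor separates them even though no group labels are used.

```python
import numpy as np

# Hypothetical illustration: latent subgroup structure in behavioral
# features surfaced by an unsupervised factor decomposition (plain SVD
# here, NOT the paper's multi-layer factor analysis).
rng = np.random.default_rng(0)

# Two hidden behavioral subgroups, 50 subjects each, 20 features.
# The subgroup membership (a stand-in for an unobserved sensitive
# attribute) shifts every feature's mean.
group_a = rng.normal(size=(50, 20)) + 2.0
group_b = rng.normal(size=(50, 20)) - 2.0
X = np.vstack([group_a, group_b])

# Center the data and take the leading factor direction.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[0]  # each subject's loading on the dominant factor

# The leading factor cleanly splits the two hidden subgroups by sign,
# without ever seeing the group labels.
print(abs(np.sign(scores[:50]).sum()) == 50)   # group_a on one side
print(abs(np.sign(scores[50:]).sum()) == 50)   # group_b on the other
```

Once such latent patterns are identified, a fairness-aware framework can condition its modeling on them in place of the unavailable sensitive attributes.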