Human Attributes Prediction under Privacy-preserving Conditions

2021 
Human attributes prediction in visual media is a well-researched topic, with a major focus on human faces. However, face images raise serious privacy concerns because they can reveal an individual's identity. How to balance this trade-off between privacy and utility is a key problem for researchers and practitioners. In this study, we make one of the first attempts to investigate human attribute prediction (emotion, age, and gender) under different de-identification privacy scenarios (eyes, lower-face, face, and head obfuscation). We first constructed the Diversity in People and Context Dataset (DPaC). We then performed a human study with eye-tracking on how humans recognize facial attributes without the presence of the face and context. Results show that, in an image, situational context is informative of a target's attributes. Motivated by our human study, we propose a multi-tasking deep learning model, Context-Guided Human Attributes Prediction (CHAPNet), for human attributes prediction under privacy-preserving conditions. Extensive experiments on DPaC and three commonly used benchmark datasets demonstrate the superiority of CHAPNet in leveraging situational context for a better interpretation of a target's attributes without the full presence of the target's face. Our research demonstrates the feasibility of visual analytics under de-identification for privacy.
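The abstract does not describe CHAPNet's implementation. As a rough, hypothetical illustration of the kind of context-guided, multi-task architecture it refers to, the sketch below encodes the (possibly obfuscated) target crop and the full-scene context with separate backbones, fuses the two feature vectors, and predicts emotion, age, and gender with separate heads. The backbone choice (ResNet-18), concatenation fusion, class counts, and loss weighting are all assumptions and not the authors' actual design.

```python
# Minimal sketch (not the authors' released code) of a context-guided,
# multi-task attribute predictor: one branch for the de-identified target
# region, one branch for the surrounding scene context, three output heads.
import torch
import torch.nn as nn
from torchvision import models


class ContextGuidedAttributeNet(nn.Module):
    def __init__(self, num_emotions=7, num_age_bins=8, num_genders=2):
        super().__init__()
        # Two ResNet-18 encoders without pretrained weights (assumed backbone).
        self.target_encoder = models.resnet18()
        self.target_encoder.fc = nn.Identity()
        self.context_encoder = models.resnet18()
        self.context_encoder.fc = nn.Identity()

        fused_dim = 512 * 2  # concatenated target + context features
        self.emotion_head = nn.Linear(fused_dim, num_emotions)
        self.age_head = nn.Linear(fused_dim, num_age_bins)
        self.gender_head = nn.Linear(fused_dim, num_genders)

    def forward(self, target_img, context_img):
        f_target = self.target_encoder(target_img)     # (B, 512)
        f_context = self.context_encoder(context_img)  # (B, 512)
        fused = torch.cat([f_target, f_context], dim=1)
        return {
            "emotion": self.emotion_head(fused),
            "age": self.age_head(fused),
            "gender": self.gender_head(fused),
        }


if __name__ == "__main__":
    model = ContextGuidedAttributeNet()
    target = torch.randn(2, 3, 224, 224)   # obfuscated target crop
    context = torch.randn(2, 3, 224, 224)  # full-scene context
    outputs = model(target, context)
    # Equal-weighted sum of per-attribute cross-entropy losses (assumed objective).
    labels = {"emotion": torch.tensor([0, 3]),
              "age": torch.tensor([2, 5]),
              "gender": torch.tensor([1, 0])}
    loss = sum(nn.functional.cross_entropy(outputs[k], labels[k]) for k in outputs)
    print(loss.item())
```

Summing cross-entropy terms over the three heads is one common multi-task training choice; the paper's actual objective and fusion scheme may differ.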