Emotion classification or face identification depends on which part of the face is analyzed

2007 
Gosselin and Schyns (2001) demonstrated that two distinct categorizations of the same faces require different visual information: the mouth is the only diagnostic region for expression, whereas the eyes and the centre of the mouth are needed to recognize gender. Using images from their database (five men and five women displaying three different emotions), we propose a model of the human visual system (HVS) dedicated to face analysis. Our HVS model is divided into two parts: a retina model that enhances structure and texture information (as a result, the video data are well conditioned), and a cortical (V1) model that extracts a description of the orientations and frequency bands of the visual stimuli. This model confirms the behavioural results of Gosselin and Schyns and, in addition, shows that the upper part of the face carries the identity (and not only the gender) of a person (more than 80% correct identification), whereas only the lower part is needed to classify emotions (angry, happy, or neutral; more than 85% correct classification). Further experiments are under way to test the model on larger databases.
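As a rough illustration of the cortical (V1) stage described above, the sketch below assumes it can be approximated by a Gabor filter bank whose per-band response energies form a feature vector; the function name, filter parameters, and energy pooling are illustrative assumptions, not the authors' implementation.

    # Illustrative sketch (not the authors' code): a V1-like decomposition of a
    # face image into orientation/frequency bands via a Gabor filter bank.
    import cv2
    import numpy as np

    def gabor_features(gray_image, orientations=8, wavelengths=(4, 8, 16)):
        """Return mean response energy per (orientation, frequency) band.

        Parameter values are illustrative assumptions, not taken from the paper.
        """
        img = gray_image.astype(np.float32) / 255.0
        features = []
        for lam in wavelengths:                       # spatial frequency bands
            for k in range(orientations):             # preferred orientations
                theta = k * np.pi / orientations
                kernel = cv2.getGaborKernel(
                    ksize=(31, 31), sigma=0.5 * lam, theta=theta,
                    lambd=lam, gamma=0.5, psi=0)
                response = cv2.filter2D(img, cv2.CV_32F, kernel)
                features.append(np.mean(response ** 2))   # band energy
        return np.asarray(features)

    # Usage: compute features separately for the upper and lower halves of a face,
    # e.g. upper half for identity, lower half for expression, as in the abstract.
    # face = cv2.imread("face.png", cv2.IMREAD_GRAYSCALE)
    # upper, lower = face[: face.shape[0] // 2], face[face.shape[0] // 2 :]
    # identity_feats, expression_feats = gabor_features(upper), gabor_features(lower)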