Vocal aspect of social laughter during virtual interaction

2019 
This paper focuses on the various types of laughter recorded during real social interactions in a virtual immersive environment. With this experiment, we investigate whether human beings can discriminate social from spontaneous laughter on the basis of audio-only or audiovisual presentations of laughs outside any context. To this end, we carried out two perceptual experiments, one in an audio-only condition and one in an audiovisual condition, with native French and Japanese subjects. Each subject listened to (or watched) 162 laughs and chose one of three responses: social, spontaneous, or unknown. The results of both experiments show that participants can discriminate the two types of laughter fairly reliably without contextual information: the correct identification rate is about 70% for spontaneous laughter, and similar for social laughter, in both the audio-only and audiovisual conditions. We then extracted acoustic characteristics for each laugh in order to investigate potential differences between the two types of laughter. A multifactorial analysis showed that perceptual behavior is correlated with some acoustic features (F0 and duration). In particular, social and spontaneous laughter differ significantly in total duration and voiced duration. Finally, we conducted a perceptual experiment on the subcategorization of social laughs based on three social factors: the speaker's physical state, the speaker's involvement, and psychological distance. The results show that both listener groups attribute similar contexts to social laughter, except that Japanese subjects judge the psychological distance between partners to be greater.
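A minimal sketch of the acoustic feature-extraction step described above: per-laugh mean F0, total duration, and voiced duration, followed by a simple group comparison. The pYIN pitch tracker, the file names, and the Mann-Whitney test are assumptions for illustration only; the paper does not specify its extraction toolchain or statistical procedure.

```python
# Sketch of per-laugh feature extraction (mean F0, total duration,
# voiced duration) and a social vs. spontaneous comparison.
# Toolchain (librosa pYIN, Mann-Whitney U) is an assumption, not the
# authors' actual pipeline; file names are hypothetical.
import numpy as np
import librosa
from scipy import stats

HOP = 512  # analysis hop in samples (assumed)

def laugh_features(path):
    """Return (mean F0 in Hz, total duration in s, voiced duration in s)."""
    y, sr = librosa.load(path, sr=None)
    f0, voiced_flag, _ = librosa.pyin(y, fmin=75, fmax=600, sr=sr, hop_length=HOP)
    total_dur = librosa.get_duration(y=y, sr=sr)
    voiced_dur = voiced_flag.sum() * HOP / sr          # voiced frames -> seconds
    mean_f0 = np.nanmean(f0) if np.any(voiced_flag) else np.nan
    return mean_f0, total_dur, voiced_dur

# Hypothetical file lists for the two laughter categories.
social = [laugh_features(p) for p in ["social_01.wav", "social_02.wav"]]
spont = [laugh_features(p) for p in ["spont_01.wav", "spont_02.wav"]]

# Compare total duration (index 1 of each feature tuple) between categories.
u, p = stats.mannwhitneyu([f[1] for f in social], [f[1] for f in spont])
print(f"total duration: U={u:.1f}, p={p:.3f}")
```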