Emotional prosody

Studies have found that some emotions, such as fear, joy, and anger, are portrayed at a higher frequency than emotions such as sadness. Decoding emotions in speech involves three stages: determining acoustic features, creating meaningful connections with those features, and processing the acoustic patterns in relation to the connections established. In the processing stage, connections with basic emotional knowledge are stored separately in a memory network specific to associations. These associations can then serve as a baseline for emotional expressions encountered in the future. Emotional meanings of speech are registered implicitly and automatically once the circumstances, importance, and other surrounding details of an event have been analyzed.

On average, listeners perceive the emotions intended for them at rates significantly better than chance (chance being approximately 10%). Error rates are nevertheless high, partly because listeners infer emotion more accurately from particular voices and perceive some emotions better than others. Vocal expressions of anger and sadness are perceived most easily, fear and happiness only moderately well, and disgust poorly.

Language can be split into two components: the verbal and vocal channels. The verbal channel is the semantic content of the speaker's chosen words, which determines the meaning of the sentence. The vocal channel is the way a sentence is spoken, which can change its meaning; it conveys the emotions felt by the speaker and gives listeners a better idea of the intended meaning. Nuances in this channel are expressed through intonation, intensity, and rhythm, which together form prosody. Usually the two channels convey the same emotion, but sometimes they differ; sarcasm and irony are two forms of humor based on this incongruence.
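The prosodic cues named above, intonation (pitch) and intensity (loudness), are things that can be measured directly from a waveform. As a minimal sketch, the snippet below estimates pitch via autocorrelation and intensity via root-mean-square energy on a synthetic 200 Hz tone; real emotional-prosody studies use recorded speech and dedicated tools, so this only illustrates what each cue refers to.

```python
import numpy as np

# Synthetic "voice": a steady 200 Hz sine with a fading loudness contour.
# (Hypothetical stand-in for a speech recording.)
sr = 16000                                # sample rate in Hz
t = np.arange(sr) / sr                    # one second of time stamps
signal = np.sin(2 * np.pi * 200 * t)
signal *= np.linspace(1.0, 0.5, sr)       # intensity decreases over time

def pitch_autocorr(frame, sr):
    """Estimate the fundamental frequency of a frame via autocorrelation."""
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    # Search only lags corresponding to 50-400 Hz, a typical speech range.
    lo, hi = sr // 400, sr // 50
    lag = lo + np.argmax(ac[lo:hi])
    return sr / lag

frame = signal[:1024]                     # one short analysis frame
f0 = pitch_autocorr(frame, sr)            # intonation cue: pitch in Hz
intensity = np.sqrt(np.mean(frame ** 2))  # loudness cue: RMS energy

print(f"estimated pitch: {f0:.1f} Hz")    # close to the true 200 Hz
print(f"RMS intensity:   {intensity:.3f}")
```

Tracking these two values frame by frame over an utterance yields the pitch and loudness contours; rhythm, the third cue, emerges from the timing of syllables and pauses along those contours.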
How the brain integrates the verbal and vocal (prosodic) components of language remains relatively unclear. It is assumed, however, that verbal and vocal content are processed in different hemispheres. Verbal content, composed of syntactic and semantic information, is processed in the left hemisphere: syntactic information primarily in the frontal regions and a small part of the temporal lobe, and semantic information primarily in the temporal regions with a smaller contribution from the frontal lobes. Prosody is processed along the same temporo-frontal pathway, but in the right hemisphere. Neuroimaging studies using functional magnetic resonance imaging (fMRI) provide further support for this hemispheric lateralization and temporo-frontal activation, although some studies show evidence that prosody perception is not exclusively lateralized to the right hemisphere and may be more bilateral. There is also some evidence that the basal ganglia play an important role in the perception of prosody.

Deficits in expressing and understanding prosody, caused by right-hemisphere lesions, are known as aprosodias. These can manifest in different forms and in various mental illnesses or diseases; aprosodia can also be caused by stroke and alcohol abuse. The types of aprosodia include motor (the inability to produce vocal inflection), expressive (when brain limitations rather than motor functions cause this inability), and receptive (when a person cannot decipher emotional speech).

Recognizing vocal expressions of emotion becomes increasingly difficult with age. Older adults have slightly more difficulty than young adults labeling vocal expressions of emotion, particularly sadness and anger, but much greater difficulty integrating vocal emotions with corresponding facial expressions.
One possible explanation for this difficulty is that combining two sources of emotion requires greater activation of the brain's emotion areas, in which older adults show decreased volume and activity. Another possible explanation is that hearing loss leads to mishearing of vocal expressions; high-frequency hearing loss is known to begin around age 50, particularly in men.
