Corrigendum to “Change in 1-year hospitalization of overall and older patients with major depressive disorder after second-generation antipsychotics augmentation treatment” [J. Affect. Disord. 230(1) (2018) 118–124]
Citations: 0 | References: 0 | Related Papers: 10
Keywords: Affect
Summary: Over time, autobiographical memories demonstrate fixed affect (i.e., maintain initial affect), fading affect, flourishing affect (i.e., increased affect), or flexible affect (i.e., change from unpleasant to pleasant or vice versa). Walker and Skowronski argued that events low in initial pleasantness are more likely to exhibit flourishing affect and flexible affect, and that unpleasant events are more likely to demonstrate flexible affect than pleasant events. However, because of the low frequency of flourishing affect and flexible affect events in individual studies, research had not examined differences between these event categories. The present study examined initial pleasantness ratings for fading affect, fixed affect, flourishing affect, and flexible affect events across four published studies. We expected and generally found lower initial pleasantness for flourishing affect and flexible affect events than for fading affect and fixed affect events. Implications are discussed. Copyright © 2016 John Wiley & Sons, Ltd.
Keywords: Affect, Flourishing
Background: One of every three women between 18 and 24 years of age may be significantly depressed. Younger women have shown increasing rates of unipolar depression since the 1950s, and the average age of onset continues to decline. Objectives: To examine the prevalence and correlates of high depressive symptoms in single college women 18 to 24 years of age. Negative thinking was posited to mediate the relationship between self-esteem and depressive symptoms. Methods: A sample of 246 women was recruited from a university student body. Each woman completed a survey that included the Center for Epidemiologic Studies Depression Scale, the Beck Depression Inventory, the Rosenberg Self-Esteem Scale, the Crandell Cognitions Inventory, and the Automatic Thoughts Questionnaire. Results: Of the women, 35% had high depressive symptoms. Negative thinking mediated the relationship between self-esteem and depressive symptoms; however, self-esteem also showed a weak direct effect on depressive symptoms. Conclusion: The findings suggest that negative thinking may play an important role in the development of depressive symptoms in college women.
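The mediation result in this abstract can be tested with the classic regression steps. Below is a minimal, hypothetical sketch in Python using statsmodels; the data file and column names (selfesteem, negthink, depress) are illustrative assumptions, not the study's actual variables or procedure.

```python
# Minimal mediation sketch (Baron & Kenny-style steps) using OLS.
# The file name and column names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey.csv")  # hypothetical data file

# Step 1: total effect of self-esteem on depressive symptoms (path c).
total = smf.ols("depress ~ selfesteem", data=df).fit()

# Step 2: effect of self-esteem on the mediator, negative thinking (path a).
a_path = smf.ols("negthink ~ selfesteem", data=df).fit()

# Step 3: mediator and predictor together (paths b and c').
direct = smf.ols("depress ~ selfesteem + negthink", data=df).fit()

# Mediation with a residual weak direct effect, as the abstract reports,
# would show c' shrinking relative to c while remaining significant.
print(total.params["selfesteem"], direct.params["selfesteem"])
print(direct.params["negthink"])
```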
Keywords: Depression
The automatic analysis of affect is a relatively new and challenging multidisciplinary research area that has gained a lot of interest over the past few years. The research and development of affect recognition systems has opened many opportunities for improving the interaction between man and machine. Although affect can be expressed through multimodal means like hand gestures, facial expressions, and body postures, this dissertation focuses on speech (i.e., vocal expressions) as the main carrier of affect. Speech carries a lot of ‘hidden’ information: by hearing a voice alone, humans can guess who is speaking, what language he/she is speaking (or accent or dialect), what age he/she is, and so on. The goal of automatic speech recognition (ASR) is to recognize what is said; in automatic speech-based emotion recognition, the goal is to recognize how something is said. In this work, several experiments are described that investigate how affect can be automatically recognized in speech.

One of the first steps in developing speech-based affect recognizers involves finding a spontaneous speech corpus that is labeled with emotions. Machine learning techniques that are often used to build these recognizers require these data to learn how to associate specific speech features (e.g., pitch, energy) with certain emotions. However, collecting and labeling real affective speech data has proved difficult; efforts to collect affective speech data in the field are described in this work. As an alternative, speech corpora that contain acted emotional speech (actors are asked to portray certain emotions) have often been used. Advantages of these corpora are that the recording conditions can be controlled, the emotions portrayed can be clearly associated with an emotion label, the costs and effort required to collect such corpora are relatively low, and the recordings are usually made available to the research community. In this work, an acted emotional speech corpus (containing basic, universal emotions like Anger, Boredom, Disgust, Fear, Happiness, Neutral, and Sadness) was used to explore and apply recognition techniques and evaluation frameworks, adopted from similar research areas like automatic speaker and language recognition, to automatic emotion recognition. Recognizers were evaluated in a detection framework, and an evaluation for handling so-called ‘out-of-set’ emotions (unknown emotions that were not present in the training data, but which can occur in real-life situations) was presented. Partly due to a lack of standardization and shared databases, the evaluation of affect recognizers remains somewhat problematic; while evaluation is an important aspect of development, it has been a relatively underexposed topic of investigation in the emotion research community.
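A minimal sketch of the kind of detection setup described above, assuming per-emotion Gaussian mixture models over precomputed acoustic feature frames: the emotion set follows the abstract, while the component count and the likelihood threshold used to reject ‘out-of-set’ emotions are illustrative assumptions, not the dissertation's actual protocol.

```python
# Sketch: per-emotion GMMs with a log-likelihood threshold that rejects
# 'out-of-set' emotions absent from the training data.
# Feature extraction is abstracted away; inputs are (n_frames, n_dims) arrays.
import numpy as np
from sklearn.mixture import GaussianMixture

EMOTIONS = ["anger", "boredom", "disgust", "fear",
            "happiness", "neutral", "sadness"]

def train_models(train_data, n_components=8):
    """train_data maps each emotion to stacked feature frames."""
    models = {}
    for emo in EMOTIONS:
        gmm = GaussianMixture(n_components=n_components,
                              covariance_type="diag")
        gmm.fit(train_data[emo])
        models[emo] = gmm
    return models

def detect(models, X, reject_threshold=-50.0):
    """Score an utterance against every model; reject if all scores are low."""
    scores = {emo: gmm.score(X) for emo, gmm in models.items()}  # mean log-lik
    best = max(scores, key=scores.get)
    if scores[best] < reject_threshold:  # threshold value is illustrative
        return "out-of-set"              # unknown emotion not seen in training
    return best
```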
The main objections against the use of acted emotional speech corpora are that the expressions are not ‘real’ but rather portrayals of prototype emotions (and hence expressed rather exaggeratedly), and that the emotions portrayed often do not occur in real-life situations. Therefore, in this work, spontaneous data was also used and methods were developed to recognize spontaneous vocal expressions of affect, like laughter. The task of the laughter detector was to recognize audible laughter in meeting speech data. Using a combination of Gaussian Mixture Models (GMMs) and Support Vector Machines (SVMs), and a combination of prosodic and spectral speech features, relatively low error rates between 3% and 12% were achieved. Although the detector did not interpret the affective meaning of the laughter, the detection of laughter alone was informative enough. Part of these findings was used to build a so-called ‘Affective Mirror’ that successfully elicited and recognized laughter with different user groups.

Other speech phenomena related to vocal expressions of affect, also in the context of meeting speech data, are expressions of opinions and sentiments. In this work, it was assumed that opinions are expressed differently from factual statements in terms of tone of voice and the words used. Classification experiments were carried out to find the best combination of lexical and prosodic features for discriminating between subjective and non-subjective clauses. As lexical features, word-level, phone-level, and character-level n-grams were used. The experiments showed that a combination of all features yields the best performance, and that the prosodic features were the weakest of all features investigated. In addition, a second task was formulated, namely the discrimination between positive and negative subjective clauses, with similar results. The relatively high error rates for both tasks, Cdet = 26%–30%, indicate that these are more difficult recognition problems than laughter detection: the relation between prosodic and lexical features on the one hand, and subjectivity and polarity (i.e., positive vs. negative) on the other, is not as clear as it is in the case of laughter.
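As a hypothetical illustration of the GMM + SVM combination over prosodic and spectral features described above for laughter detection, the sketch below uses librosa and scikit-learn; the specific features (MFCCs, energy, YIN pitch), model sizes, and simple score fusion are assumptions, not the dissertation's actual system.

```python
# Sketch of a laughter-vs-speech detector fusing a GMM likelihood ratio
# with an SVM over segment-level statistics of prosodic + spectral features.
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def frame_features(y, sr):
    """Spectral (MFCC) plus simple prosodic (energy, pitch) frame features."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)     # spectral envelope
    rms = librosa.feature.rms(y=y)                          # frame energy
    f0 = librosa.yin(y, fmin=60, fmax=400, sr=sr)           # pitch contour
    n = min(mfcc.shape[1], rms.shape[1], len(f0))
    return np.vstack([mfcc[:, :n], rms[:, :n], f0[None, :n]]).T  # (frames, dims)

def segment_stats(feats):
    """Collapse frame features to one vector of means and std devs."""
    return np.concatenate([feats.mean(axis=0), feats.std(axis=0)])

# GMM branch: log-likelihood ratio of a laughter model vs. a speech model.
gmm_laugh = GaussianMixture(n_components=16, covariance_type="diag")
gmm_speech = GaussianMixture(n_components=16, covariance_type="diag")
# gmm_laugh.fit(laugh_frames); gmm_speech.fit(speech_frames)  # corpora assumed

# SVM branch: classifier over segment-level statistics.
svm = make_pipeline(StandardScaler(), SVC(probability=True))
# svm.fit(segment_vectors, labels)  # labels: 1 = laughter, 0 = speech

def laughter_score(y, sr):
    """Higher scores indicate laughter; assumes both branches are fitted."""
    feats = frame_features(y, sr)
    llr = gmm_laugh.score(feats) - gmm_speech.score(feats)
    p_svm = svm.predict_proba(segment_stats(feats)[None, :])[0, 1]
    return 0.5 * llr + 0.5 * p_svm  # naive late fusion of the two branches
```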
As an intermediate between real affective expressions and acted expressions, elicited affective expressions were employed in this dissertation in several human perception and classification experiments. To this end, a multimodal corpus with elicited affect was recorded. Affective vocal and facial expressions were elicited via a multiplayer first-person shooter video game (Unreal Tournament) that was manipulated by the experimenter. These expressions were captured by close-talk microphones and high-quality webcams, and were afterwards rated by the players themselves on Arousal (active-passive) and Valence (positive-negative) scales. After post-processing, perception and classification experiments were carried out on this data. The first experiment carried out with this unique kind of data addressed how the level of agreement between observers on the perceived emotion is affected when audio-only, video-only, audiovisual, or audiovisual + context clips containing affective expressions are shown. The observers were asked to rate each clip on Arousal and Valence scales. The results showed that the agreement among human observers was highest when audiovisual clips were shown. Furthermore, the observers reached higher agreement on Valence judgments than on Arousal judgments. Additionally, the results indicated that the ‘self’-ratings of the gamers themselves differed somewhat from the ‘observed’-ratings of the human observers. This finding was further investigated in a second experiment, in which six raters re-annotated a substantial part of the corpus. The results confirmed that there is a discrepancy between what the ‘self’-raters (i.e., the gamers themselves) experienced/felt and what observers perceived based on the gamers’ vocal and facial expressions.

This finding has consequences for the development of automatic affect analyzers that use these ratings: the goal of affect analyzers can be to recognize ‘felt’ affect or to recognize ‘observed/perceived’ affect. Two different types of speech-based affect recognizers were developed in parallel to recognize either ‘felt’ or ‘perceived’ affect on continuous Arousal and Valence scales. The results showed that ‘felt’ emotions are much harder to predict than ‘perceived’ emotions. Although these recognizers performed moderately from a classification perspective, they did not perform too badly in comparison to human performance. The recognizers developed depend heavily on how the affect data is rated by humans; if this data reflects moderate human judgments of affect, then it can be difficult for the machine to perform well (in an absolute sense).

The work presented in this dissertation shows that the automatic recognition of affect in speech is complicated by the fact that real affect, as encountered in real-life situations, is a very complex phenomenon that sometimes cannot be described straightforwardly in ways that are useful for computer scientists (who would like to build affect recognizers). The use of real affect data has led to the development of recognizers that are more targeted toward affect-related expressions; laughter and subjectivity are examples of such expressions. The Arousal and Valence descriptors offer a convenient way to describe the meaning of these affective expressions. The relatively high error rates obtained for Arousal and Valence prediction suggest that the acoustic correlates used in this research only partly capture the characteristics of real affective speech. The search for stronger acoustic correlates or vocal profiles for specific emotions continues, and is partly complicated by the ‘noise’ that comes with real affect, which remains a challenge for the research community working toward automatic affect analyzers.
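A minimal sketch of how two such parallel recognizers for continuous Arousal and Valence might be set up, assuming precomputed utterance-level feature vectors; the SVR models and the split into ‘felt’ (self-rated) versus ‘perceived’ (observer-rated) targets follow the abstract's description, but all implementation details are assumptions.

```python
# Sketch: continuous Arousal/Valence prediction from acoustic features,
# trained separately on 'felt' (self) and 'perceived' (observer) ratings.
# X is an (n_utterances, n_features) array; ratings are dicts of arrays.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

def build_affect_regressors(X, ratings):
    """Fit one regressor per affect dimension and return them by name."""
    models = {}
    for dim in ("arousal", "valence"):
        model = make_pipeline(StandardScaler(), SVR(kernel="rbf"))
        model.fit(X, ratings[dim])
        models[dim] = model
    return models

# Two parallel recognizers, as described above: the 'felt' models regress
# onto the gamers' self-ratings, the 'perceived' models onto observer
# ratings of the same clips (both rating sources assumed available).
# felt = build_affect_regressors(X, self_ratings)
# perceived = build_affect_regressors(X, observer_ratings)
```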
Keywords: Affect, Stress, Affective Computing
The present chapter analyzes relations between affect and health behaviors from the perspective of the action control model of affect regulation. It presents evidence that forming if-then plans or implementation intentions can emancipate health actions from unwanted influence by three kinds of affect—experienced affect, anticipated affect, and implicit affect. For each of these kinds of affect, it demonstrates that emancipation can be achieved in two ways—either by directly targeting the affect itself so as to undermine the strength of the affective response, or by targeting the relationship between affect and health behavior so that the translation of affect into action is reduced or blocked. It concludes that the impact of affect on health decisions and actions is not inevitable: affective influence can be modulated effectively using if-then plans.
Keywords: Affect, Affect regulation
Research has demonstrated that two types of affect have an influence on judgment and decision making: incidental affect (affect unrelated to a judgment or decision, such as a mood) and integral affect (affect that is part of the perceiver's internal representation of the option or target under consideration). So far, these two lines of research have seldom crossed, so knowledge concerning their combined effects is largely missing. To fill this gap, the present review highlights differences and similarities between integral and incidental affect. Further, common and unique mechanisms that enable these two types of affect to influence judgments and choices are identified. Finally, some basic principles for affect integration when the two sources co-occur are outlined. These principles are discussed in relation to existing work that has focused on incidental or integral affect but not both.
Keywords: Affect
Changes in students’ depressive symptoms during the course of treatment at college counseling centers were examined by sexual orientation. In Study 1, results showed that depressive symptoms decreased similarly across sexual orientation groups during the course of treatment. In Study 2, family support did not moderate the relationship between pre‐ and posttreatment depressive symptoms but had a direct effect on posttreatment depressive symptoms for students questioning their sexual identity. Clinical implications are discussed.
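The moderation test mentioned in Study 2 is conventionally run as a regression with an interaction term. The sketch below is a hypothetical illustration in Python; the file and column names (post_dep, pre_dep, family_support) are assumptions, not the study's actual data or analysis.

```python
# Sketch of a moderation test: an interaction term in an OLS model of
# posttreatment symptoms. File and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("counseling.csv")  # hypothetical data file

# Moderation: does family support change the pre -> post relationship?
# The '*' expands to both main effects plus their interaction.
model = smf.ols("post_dep ~ pre_dep * family_support", data=df).fit()

# A nonsignificant pre_dep:family_support coefficient alongside a
# significant family_support main effect matches the pattern reported.
print(model.summary())
```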
Keywords: Sexual identity, Depression