Conciseness is better: Recurrent attention LSTM model for document-level sentiment analysis
2021
Abstract
Long short-term memory (LSTM) networks and gated recurrent units (GRUs) are usually employed to recurrently learn variable-length sentence representations with long-range dependencies in document-level sentiment analysis. However, LSTMs and GRUs are biased models in which the words at the end of a sentence dominate those at the beginning. Although reweighting methods such as self-attention can weight each word according to its importance, when processing a document with many tokens they tend to assign uniformly small weights to every word, so the keywords are drowned out by non-sentiment words. In this paper, a recurrent attention LSTM neural network is presented that iteratively locates an attention region covering the key sentiment words. By gradually narrowing the attention range and reducing the number of tokens, the model increases the weight of the key sentiment words in the final classification. Additionally, a joint loss function is used to highlight both the keywords and the appropriate attention regions. Comparative experiments are conducted on the IMDB, Yelp, and Amazon document-level corpora. The results show that the proposed model outperforms several state-of-the-art methods in document-level sentiment classification.
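The following is a minimal sketch, not the authors' implementation, of the recurrent-attention idea described in the abstract: an LSTM encodes the tokens, and attention is repeatedly narrowed to the highest-scoring tokens before the final classification. The class name RecurrentAttentionLSTM, the parameters num_steps and shrink_ratio, and the attention-entropy term in the joint loss are illustrative assumptions, since the paper's exact formulation is not given here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RecurrentAttentionLSTM(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=128,
                 num_classes=2, num_steps=3, shrink_ratio=0.5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                            bidirectional=True)
        self.att = nn.Linear(2 * hidden_dim, 1)   # token-level attention scores
        self.fc = nn.Linear(2 * hidden_dim, num_classes)
        self.num_steps = num_steps                # number of refinement passes
        self.shrink_ratio = shrink_ratio          # fraction of tokens kept per pass

    def forward(self, tokens):
        # tokens: (batch, seq_len) integer ids, 0 = padding
        mask = tokens.ne(0)
        x = self.embed(tokens)
        h, _ = self.lstm(x)                       # (batch, seq_len, 2*hidden)
        scores = self.att(h).squeeze(-1)          # (batch, seq_len)

        keep = mask.clone()
        for _ in range(self.num_steps):
            # Gradually narrow the attention region to the highest-scoring tokens,
            # so key sentiment words retain more weight in the final pooling.
            masked_scores = scores.masked_fill(~keep, float('-inf'))
            k = max(1, int(keep.sum(dim=-1).min().item() * self.shrink_ratio))
            topk = masked_scores.topk(k, dim=-1).indices
            new_keep = torch.zeros_like(keep)
            new_keep.scatter_(1, topk, torch.ones_like(topk, dtype=torch.bool))
            keep = new_keep & mask

        masked_scores = scores.masked_fill(~keep, float('-inf'))
        alpha = F.softmax(masked_scores, dim=-1)
        doc = torch.bmm(alpha.unsqueeze(1), h).squeeze(1)  # weighted document vector
        return self.fc(doc), alpha

def joint_loss(logits, alpha, labels, beta=0.1):
    # Assumed form of a joint objective: cross-entropy for the class label plus an
    # attention-entropy penalty that encourages a concentrated attention region.
    ce = F.cross_entropy(logits, labels)
    entropy = -(alpha.clamp_min(1e-8).log() * alpha).sum(dim=-1).mean()
    return ce + beta * entropy
```

As a usage sketch, passing a padded batch of token ids through the model returns class logits and the final attention weights, which can be fed to joint_loss together with the document labels during training.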