Optimizing Word Embedding for Fine-Grained Sentiment Analysis

2019 
Word embeddings have been used extensively for a variety of Natural Language Processing tasks. However, word vectors trained on corpus context information fail to distinguish words that share a context but differ in meaning, so words with opposite semantics can end up with similar vectors. This harms Natural Language Processing tasks such as fine-grained sentiment analysis. In this paper, a new word-vector optimization model is proposed. The model can be applied to any pre-trained word vectors: within a bounded range around each original vector, it pushes words with opposite semantics away from each other and pulls words with the same semantics closer together. Experimental results on a fine-grained sentiment analysis task for Chinese Weibo show that our model improves traditional word embeddings, and the vectors optimized with our model outperform the unoptimized ones.
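The abstract does not give the model's exact formulation, but the general idea it describes (attract same-semantic word pairs, repel opposite-semantic pairs, and keep each vector within a bounded range of its pre-trained position) can be sketched as a simple iterative refinement. The sketch below is an illustrative assumption, not the paper's method; the names `same_pairs`, `oppo_pairs`, `margin`, `radius`, and `lr` are hypothetical parameters.

```python
# Minimal sketch of sentiment-aware embedding refinement, assuming:
# - same_pairs: word pairs with the same sentiment polarity (attract)
# - oppo_pairs: word pairs with opposite polarity (repel within a margin)
# - each refined vector is constrained to a ball of given radius
#   around its original pre-trained position ("within a certain range")
import numpy as np

def refine(vecs, same_pairs, oppo_pairs,
           margin=1.0, radius=0.5, lr=0.1, epochs=50):
    """vecs: dict mapping word -> np.ndarray; pairs: lists of (word, word)."""
    orig = {w: v.copy() for w, v in vecs.items()}
    for _ in range(epochs):
        for a, b in same_pairs:            # pull same-polarity words together
            diff = vecs[a] - vecs[b]
            vecs[a] -= lr * diff
            vecs[b] += lr * diff
        for a, b in oppo_pairs:            # push opposite-polarity words apart
            diff = vecs[a] - vecs[b]
            dist = np.linalg.norm(diff)
            if 0.0 < dist < margin:        # repel only when closer than margin
                grad = diff / dist
                vecs[a] += lr * grad
                vecs[b] -= lr * grad
        for w, v in vecs.items():          # project back inside the radius
            off = v - orig[w]
            n = np.linalg.norm(off)
            if n > radius:
                vecs[w] = orig[w] + off * (radius / n)
    return vecs

# Hypothetical usage: "good"/"great" share polarity, "good"/"bad" oppose.
vecs = {w: np.random.randn(50) for w in ("good", "great", "bad")}
vecs = refine(vecs, same_pairs=[("good", "great")],
              oppo_pairs=[("good", "bad")])
```

The radius projection is what lets the refined vectors stay compatible with the pre-trained space while still separating sentiment-opposed neighbors; without it, repeated repulsion could drift words arbitrarily far from their original positions.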