An Adaptive Sentence Representation Learning Model Based on Multi-gram CNN

2017 
Natural Language Processing has attracted increasing attention recently. Traditional approaches to language modeling rely primarily on elaborately designed features and complicated natural language processing tools, which demand a large amount of human effort and are prone to error propagation and data sparsity. Deep neural network methods have been shown to learn the implicit semantics of text without extra knowledge. To better learn the deep underlying semantics of sentences, most deep neural network language models adopt a multi-gram strategy. However, current multi-gram strategies in the CNN framework are mostly realized by concatenating trained multi-gram vectors to form the sentence vector, which increases the number of parameters to be learned and is prone to overfitting. To alleviate this problem, we propose a novel adaptive sentence representation learning model based on a multi-gram CNN framework. It learns adaptive importance weights for the different n-gram features and forms the sentence representation as a weighted sum of the extracted n-gram features, which largely reduces the number of parameters to be learned and lowers the risk of overfitting. Experimental results show that the proposed method improves performance on sentiment and relation classification tasks.
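The key idea in the abstract — combining multi-gram CNN features by a learned weighted sum rather than by concatenation — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the function names, filter shapes, and the tanh/max-pooling choices are illustrative assumptions. Note that the weighted sum keeps the sentence vector at the per-gram feature dimension k, while concatenation would grow it to k times the number of window sizes.

```python
import numpy as np

def ngram_features(emb, n, W, b):
    """Convolve every window of n word vectors, then max-pool over time.

    emb: (L, d) word-embedding matrix for a sentence of length L.
    W:   (n*d, k) convolution filter bank; b: (k,) bias. Both assumed trained.
    Returns a (k,) n-gram feature vector.
    """
    L, _ = emb.shape
    # Flatten each sliding window of n consecutive word vectors.
    windows = np.stack([emb[i:i + n].reshape(-1) for i in range(L - n + 1)])
    conv = np.tanh(windows @ W + b)   # (L-n+1, k) feature maps
    return conv.max(axis=0)           # max-over-time pooling -> (k,)

def adaptive_sentence_vector(emb, params, alpha):
    """Weighted sum of multi-gram features instead of concatenation.

    params: dict mapping window size n -> (W, b).
    alpha:  unnormalized importance scores, one per window size (learned).
    Returns a (k,) sentence vector, independent of how many n-grams are used.
    """
    feats = np.stack([ngram_features(emb, n, W, b)
                      for n, (W, b) in params.items()])      # (G, k)
    weights = np.exp(alpha) / np.exp(alpha).sum()            # softmax weights
    return weights @ feats                                   # (k,)

# Toy usage with random (untrained) parameters, windows of 2, 3, and 4 words.
rng = np.random.default_rng(0)
d, k, L = 8, 6, 10
emb = rng.standard_normal((L, d))
params = {n: (rng.standard_normal((n * d, k)), np.zeros(k)) for n in (2, 3, 4)}
alpha = np.zeros(3)  # equal importance before training
sent_vec = adaptive_sentence_vector(emb, params, alpha)
```

Under concatenation the sentence vector here would be 3 × k = 18 dimensions; the weighted sum keeps it at k = 6, which is the parameter saving the abstract refers to, since any classifier on top of the sentence vector shrinks accordingly.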