Learning Discriminative Neural Representations for Event Detection

2021 
Retrieving event instances from texts is pivotal to various natural language processing applications (e.g., automatic question answering and dialogue systems), and the first task to perform is event detection. It comprises two related sub-tasks, trigger identification and type classification, of which the former is considered to play the dominant role. Nevertheless, predicting event triggers correctly is notoriously challenging. Existing work has made tremendous progress on the task by incorporating manual features, data augmentation, neural networks, and so forth. Owing to the scarcity of data and the insufficient representation of trigger words, however, it still fails to precisely determine the spans of triggers (coined the trigger span detection problem). To address this challenge, we propose to learn discriminative neural representations (DNR) from texts. Specifically, our DNR model tackles the trigger span detection problem with two novel techniques: 1) a contrastive learning strategy, which enlarges the discrepancy between the representations of words inside and outside triggers; and 2) a Mixspan strategy, which better trains the model to differentiate words near trigger span boundaries. Extensive experiments on the ACE2005 and TAC2015 benchmarks demonstrate the superiority of our DNR model, which achieves state-of-the-art performance.
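The abstract only sketches the contrastive component, so the snippet below is a minimal, hypothetical PyTorch illustration of that general idea: pulling together the representations of tokens inside a trigger span while pushing them away from tokens outside it. The function name `trigger_contrastive_loss`, the tensor layout, and the `temperature` value are assumptions for illustration, not the paper's actual formulation.

```python
import torch
import torch.nn.functional as F

def trigger_contrastive_loss(token_reprs, trigger_mask, temperature=0.1):
    """For each token inside a trigger, treat the other in-trigger tokens as
    positives and all out-of-trigger tokens as negatives, using an
    InfoNCE-style objective so the two groups become well separated.

    token_reprs:  (seq_len, hidden) encoder outputs (e.g., from BERT)
    trigger_mask: (seq_len,) 1 for tokens inside a trigger span, 0 otherwise
    """
    reprs = F.normalize(token_reprs, dim=-1)           # work in cosine-similarity space
    sim = torch.exp(reprs @ reprs.t() / temperature)   # exponentiated pairwise similarities

    inside = trigger_mask.bool()
    in_idx = inside.nonzero(as_tuple=True)[0]
    out_idx = (~inside).nonzero(as_tuple=True)[0]
    if len(in_idx) < 2 or len(out_idx) == 0:
        return token_reprs.new_zeros(())               # nothing to contrast

    losses = []
    for i in in_idx:
        pos = sim[i, in_idx].sum() - sim[i, i]         # other in-trigger tokens, excluding self
        neg = sim[i, out_idx].sum()                    # all out-of-trigger tokens
        losses.append(-torch.log(pos / (pos + neg)))
    return torch.stack(losses).mean()

# Usage sketch: add this term to the usual trigger-classification loss.
# loss = classification_loss + trigger_contrastive_loss(token_reprs, trigger_mask)
```

In this reading, the objective directly enlarges the margin between in-trigger and out-of-trigger word representations, which is the property the abstract attributes to the contrastive strategy.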