CrisisBERT: A Robust Transformer for Crisis Classification and Contextual Crisis Embedding

2021 
Detecting crisis events accurately is an important task, as it allows the relevant authorities to take the actions needed to mitigate damage. For this purpose, social media serve as a timely information source due to their prevalence and high volume of first-hand accounts. While there is prior work on crisis detection, much of it does not perform crisis embedding and classification using state-of-the-art attention-based deep neural network models, such as Transformers and document-level contextual embeddings. In contrast, we propose CrisisBERT, an end-to-end transformer-based model for two crisis classification tasks, namely crisis detection and crisis recognition, which shows promising results in both accuracy and F1 score. CrisisBERT also demonstrates superior robustness over various benchmarks: extending from 6 to 36 events with only 51.4% additional data points incurs merely marginal performance degradation. We further propose Crisis2Vec, an attention-based, document-level contextual embedding architecture for crisis embedding, which outperforms conventional crisis embedding methods such as Word2Vec and GloVe.
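The difference between static averaging (as in Word2Vec or GloVe pipelines) and attention-based document-level pooling can be sketched as follows. This is an illustrative toy in NumPy, not the paper's actual Crisis2Vec architecture: the function name, dimensions, and the random query vector are assumptions for demonstration (in a real transformer the query and token vectors are learned and contextual).

```python
import numpy as np

def attention_pool(token_embeddings, query):
    """Attention-weighted pooling of token vectors into one document vector."""
    # Scaled dot-product scores between a query vector and each token.
    scores = token_embeddings @ query / np.sqrt(token_embeddings.shape[1])
    weights = np.exp(scores - scores.max())  # softmax, numerically stable
    weights /= weights.sum()
    return weights @ token_embeddings  # (dim,) document embedding

rng = np.random.default_rng(0)
tokens = rng.normal(size=(12, 8))  # 12 tokens with 8-dim vectors (toy sizes)
query = rng.normal(size=8)         # stands in for a learned attention query

doc_vec = attention_pool(tokens, query)  # attention-based document embedding
mean_vec = tokens.mean(axis=0)           # static averaging, Word2Vec-style
```

Attention pooling lets salient tokens (e.g. the crisis-bearing words in a tweet) dominate the document vector, whereas plain averaging weights every token equally.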