Generating Multimedia Storyline for Effective Disaster Information Awareness

2019 
Storyline generation has emerged as an effective method for describing the evolution of disasters. However, because disaster data are temporally and spatially scattered, heterogeneous, and prone to information overload, most existing storylines are built only from textual data and deliver limited information. In this paper, we introduce a novel framework for generating multimedia storylines that provide more concise and vivid information and a deeper understanding of real-time events. We first adopt generative adversarial networks to implement an unsupervised bilingual document summarization model. Then, we transform the image–text incorporation problem into a multi-label learning problem and use convolutional neural networks to train a classification model. Finally, the bilingual documents and images are jointly summarized and embedded into a two-layer storyline generation framework. Experiments on real hurricane data sets demonstrate the effectiveness of the proposed methods at each level and of the overall framework.
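To make the second step concrete, the following is a minimal sketch of a multi-label image classifier of the kind the abstract describes, assuming a PyTorch setup with a ResNet-18 backbone; the label set, backbone, and hyperparameters are illustrative placeholders, not the authors' actual configuration.

```python
# Minimal multi-label classification sketch (assumption: PyTorch, ResNet-18 backbone).
# Each image may carry several event-related tags at once, so the output layer
# uses independent sigmoids trained with binary cross-entropy rather than softmax.
import torch
import torch.nn as nn
from torchvision import models

NUM_LABELS = 8  # hypothetical tag set, e.g. flooding, damage, rescue, ...

class MultiLabelCNN(nn.Module):
    def __init__(self, num_labels: int = NUM_LABELS):
        super().__init__()
        self.backbone = models.resnet18(weights=None)  # any CNN backbone would do
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, num_labels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.backbone(x)  # raw logits, one per label

model = MultiLabelCNN()
criterion = nn.BCEWithLogitsLoss()  # one binary loss per label
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on random tensors standing in for disaster images.
images = torch.randn(4, 3, 224, 224)                    # batch of 4 RGB images
targets = torch.randint(0, 2, (4, NUM_LABELS)).float()  # multi-hot label vectors

logits = model(images)
loss = criterion(logits, targets)
optimizer.zero_grad()
loss.backward()
optimizer.step()

# At inference time, thresholding the sigmoid turns logits into tag sets,
# which can then be matched against labels extracted from the text side.
predicted_tags = (torch.sigmoid(logits) > 0.5).int()
```

Framing the image–text incorporation as multi-label classification lets a single image be linked to several textual topics at once, which is the property the joint summarization layer relies on.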