Context-Aware Network Embedding via Variational Autoencoders for Link Prediction

2018 
Network Embedding (NE) plays an important role in network analysis in the era of big data. Most current Network Representation Learning (NRL) models consider only structural information and produce static embeddings. However, the same vertex can exhibit different characteristics when interacting with different vertices. In this paper, we propose a context-aware network embedding model that seamlessly integrates the structural information and the textual information of each vertex. We employ a Variational AutoEncoder (VAE) to statically encode the textual information of each vertex and use a mutual attention mechanism to dynamically assign embeddings to a vertex according to the different neighbors it interacts with. Comprehensive experiments were conducted on two publicly available link prediction datasets. Experimental results demonstrate that our model outperforms the baselines.
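The mutual attention step described above can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: it assumes each vertex's text is already encoded as a matrix of per-word feature vectors (e.g. from the VAE encoder), and the attention matrix `A` is a hypothetical trainable parameter. Scores between the two vertices' word features are pooled and softmax-normalized so each vertex's text embedding depends on the neighbor it interacts with.

```python
import numpy as np

def mutual_attention(U, V, A):
    """Context-aware text embeddings for a vertex pair (illustrative sketch).

    U: (m, d) word-level features of vertex u's text
    V: (n, d) word-level features of vertex v's text
    A: (d, d) attention matrix (a trainable parameter; random here)
    Returns a pair of (d,) embeddings, each conditioned on the other vertex.
    """
    # Pairwise correlation scores between u's and v's word features.
    F = np.tanh(U @ A @ V.T)                  # shape (m, n)
    # Pool over the opposite vertex to score each word's importance.
    g_u = F.mean(axis=1)                      # shape (m,)
    g_v = F.mean(axis=0)                      # shape (n,)
    # Softmax-normalize into attention weights.
    a_u = np.exp(g_u) / np.exp(g_u).sum()
    a_v = np.exp(g_v) / np.exp(g_v).sum()
    # Attention-weighted sums give context-aware text embeddings.
    return U.T @ a_u, V.T @ a_v               # each shape (d,)

rng = np.random.default_rng(0)
U = rng.normal(size=(5, 4))   # vertex u: 5 words, 4-dim features
V = rng.normal(size=(7, 4))   # vertex v: 7 words, 4-dim features
A = rng.normal(size=(4, 4))
emb_u, emb_v = mutual_attention(U, V, A)
```

Because the weights are computed from the interaction scores `F`, the same vertex `u` receives a different embedding when paired with a different neighbor, which is what makes the embedding "context-aware".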