Representation learning with complete semantic description of knowledge graphs

2017 
Representation Learning (RL) of knowledge graphs aims to project both entities and relations into a continuous low-dimensional space. There exist two kinds of entity representations in Knowledge Graphs (KGs): structure-based representation and description-based representation. Most methods represent entities using only the fact triples of KGs through translation-based embedding models, and therefore cannot integrate the rich information in entity descriptions with the triple structure information. In this paper, we propose a novel RL method named Representation Learning with Complete semantic Description of Knowledge Graphs (RLCD), which exploits all semantic information of entity descriptions and fact triples of KGs to enrich the semantic representations of KGs. More specifically, we use a Doc2Vec encoder to encode the full semantic information of entity descriptions without losing the contextual relevance within those descriptions, and further learn knowledge representations from triples together with entity descriptions. The experimental results show that RLCD outperforms the state-of-the-art method DKRL in terms of mean rank and HITS. Moreover, RLCD is much faster than DKRL.
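To make the translation-based setting concrete, the following is a minimal sketch (not the paper's implementation) of the TransE-style energy that such models build on, plus a hypothetical combined score mixing structure-based and description-based entity embeddings under a shared relation vector, in the spirit of the DKRL-style decomposition the abstract refers to. All function names and the exact combination are illustrative assumptions.

```python
import math

def transe_score(h, r, t):
    """Translation-based plausibility ||h + r - t||_2 (lower = more plausible).
    h, r, t are embedding vectors given as plain lists of floats."""
    return math.sqrt(sum((hi + ri - ti) ** 2 for hi, ri, ti in zip(h, r, t)))

def combined_score(h_s, t_s, h_d, t_d, r):
    """Hypothetical joint energy mixing structure-based embeddings (h_s, t_s)
    and description-based embeddings (h_d, t_d) under one relation vector r,
    summing the four cross terms E_SS + E_DD + E_DS + E_SD."""
    return (transe_score(h_s, r, t_s) + transe_score(h_d, r, t_d)
            + transe_score(h_d, r, t_s) + transe_score(h_s, r, t_d))
```

A triple is scored low when the head embedding translated by the relation lands near the tail embedding; the cross terms push the structure-based and description-based spaces toward consistency.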