Text-Enhanced Knowledge Representation Learning Based on Gated Convolutional Networks

2019 
Knowledge representation learning (KRL), which transforms entities and relations into a continuous low-dimensional vector space, has attracted considerable research interest. Most existing knowledge graph (KG) completion models consider only the structural representation of triples and ignore the important textual information in entity descriptions in the knowledge base. We propose a text-enhanced KG model based on gated convolutional networks (GConvTE), which learns from entity descriptions and symbolic triples jointly through feature fusion. Specifically, each triple (head entity, relation, tail entity) is represented as a 3-column structural embedding matrix, a 3-column textual embedding matrix, and a 3-column joint embedding matrix, where each column vector represents one triple element. Textual embeddings are obtained by encoding entity descriptions with a bidirectional gated recurrent unit with attention (A-BGRU), and joint embeddings are obtained by combining the textual and structural embeddings. Extending the feature dimension in the embedding layer, these three matrices are concatenated into a 3-channel feature block and fed into the convolution layer, where a gated unit selectively outputs the joint feature maps. These feature maps are concatenated and then multiplied with a weight vector via a dot product to produce a score. Experimental results show that GConvTE achieves better link prediction performance than previous state-of-the-art embedding models on two benchmark datasets.
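The following is a minimal sketch, assuming a PyTorch implementation (not the authors' released code), of how the gated-convolution scoring described in the abstract might be structured: the structural, textual, and joint embeddings of a triple are stacked into a 3-channel feature block, passed through a gated convolution, and scored with a weight vector. The class name GConvTEScorer, the dimensions, and the layer choices are illustrative assumptions.

```python
import torch
import torch.nn as nn

class GConvTEScorer(nn.Module):
    """Hypothetical sketch of GConvTE triple scoring (assumed, not the authors' code)."""
    def __init__(self, d=100, n_filters=64):
        super().__init__()
        # Gated convolution: one branch produces feature maps, the other a sigmoid gate.
        self.conv_feat = nn.Conv2d(3, n_filters, kernel_size=(1, 3))
        self.conv_gate = nn.Conv2d(3, n_filters, kernel_size=(1, 3))
        # Weight vector applied to the concatenated feature maps via a dot product.
        self.fc = nn.Linear(n_filters * d, 1, bias=False)

    def forward(self, structural, textual, joint):
        # Each input: (batch, d, 3), columns are head, relation, tail embeddings.
        x = torch.stack([structural, textual, joint], dim=1)          # (batch, 3, d, 3)
        feats = self.conv_feat(x) * torch.sigmoid(self.conv_gate(x))  # gated feature maps
        return self.fc(feats.flatten(start_dim=1))                    # triple score

# Usage with random embeddings; in the paper, textual embeddings would come from the
# A-BGRU encoder and joint embeddings from fusing textual and structural embeddings.
scorer = GConvTEScorer(d=100)
structural, textual, joint = (torch.randn(2, 100, 3) for _ in range(3))
print(scorer(structural, textual, joint).shape)  # torch.Size([2, 1])
```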