Hierarchical Policy Network with Multi-agent for Knowledge Graph Reasoning Based on Reinforcement Learning.

2021 
Multi-hop reasoning on Knowledge Graphs (KGs) aims to infer triplets that are not in a KG in order to address the KG incompleteness problem. Reinforcement learning (RL) methods, which exploit an agent that takes incremental steps by sampling a relation and an entity (called an action) to extend its path, have yielded superior performance. Existing RL methods, however, cannot gracefully handle the large-scale action space in KGs, which leads to the curse of dimensionality. Hierarchical reinforcement learning decomposes a complex reinforcement learning problem into several sub-problems and solves them separately, which can achieve better results than solving the entire problem directly. Building on this, in this paper we propose to divide the action selection process at each step into three stages: 1) selecting a pre-clustered relation cluster, 2) selecting a relation within the chosen cluster, and 3) selecting the tail entity of the relation chosen in the previous stage. Each stage has an agent that makes the selection, and together the agents form a hierarchical policy network. Furthermore, for the environment representation of KGs, existing methods simply concatenate the different parts (the embeddings of the start entity, the current entity, and the query relation), ignoring the potential connections between them; we therefore propose a convolutional neural network structure based on the Inception network to better extract features of the environment and enhance the interaction across its different parts. Experimental results on three datasets demonstrate the effectiveness of the proposed method.
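
The three-stage action selection can be pictured as three small policies chained together. The following PyTorch sketch is only an illustration of that idea under assumed module names, dimensions, and a pre-built mapping from clusters to relations and from relations to tail entities; it is not the authors' exact architecture.

```python
# Minimal sketch of three-stage hierarchical action selection:
# stage 1 picks a relation cluster, stage 2 a relation in that cluster,
# stage 3 a tail entity of that relation. All names and shapes are illustrative.
import torch
import torch.nn as nn


class StagePolicy(nn.Module):
    """Scores a set of candidate actions given the current state encoding."""

    def __init__(self, state_dim: int, action_dim: int, hidden_dim: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, state: torch.Tensor, candidates: torch.Tensor) -> torch.Tensor:
        # state: (state_dim,), candidates: (num_candidates, action_dim)
        expanded = state.unsqueeze(0).expand(candidates.size(0), -1)
        scores = self.mlp(torch.cat([expanded, candidates], dim=-1)).squeeze(-1)
        return torch.softmax(scores, dim=-1)  # distribution over candidates


class HierarchicalPolicy(nn.Module):
    """Three agents: relation cluster -> relation -> tail entity."""

    def __init__(self, state_dim: int, rel_dim: int, ent_dim: int):
        super().__init__()
        self.cluster_agent = StagePolicy(state_dim, rel_dim)   # stage 1
        self.relation_agent = StagePolicy(state_dim, rel_dim)  # stage 2
        self.entity_agent = StagePolicy(state_dim, ent_dim)    # stage 3

    def step(self, state, cluster_embs, relations_by_cluster, tails_by_relation):
        # Stage 1: choose a pre-clustered relation cluster.
        p_cluster = self.cluster_agent(state, cluster_embs)
        c = torch.multinomial(p_cluster, 1).item()

        # Stage 2: choose a relation inside the chosen cluster.
        rel_embs, rel_ids = relations_by_cluster[c]
        p_rel = self.relation_agent(state, rel_embs)
        r = torch.multinomial(p_rel, 1).item()

        # Stage 3: choose the tail entity of the chosen relation.
        ent_embs, ent_ids = tails_by_relation[rel_ids[r]]
        p_ent = self.entity_agent(state, ent_embs)
        e = torch.multinomial(p_ent, 1).item()

        # Joint log-probability of the composed action, usable in a
        # REINFORCE-style policy-gradient update.
        log_prob = torch.log(p_cluster[c]) + torch.log(p_rel[r]) + torch.log(p_ent[e])
        return rel_ids[r], ent_ids[e], log_prob
```

The benefit of this decomposition is that each agent only scores a small candidate set (clusters, then relations in one cluster, then tails of one relation) instead of the full cross-product of relations and entities.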
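
For the environment representation, the abstract contrasts plain concatenation of the start-entity, current-entity, and query-relation embeddings with an Inception-style convolutional encoder. The sketch below is one plausible realization under assumed kernel sizes, channel counts, and stacking scheme; the paper's exact design may differ.

```python
# Sketch of an inception-style state encoder: the three embeddings are stacked
# as channels and passed through parallel 1D convolutions with different
# receptive fields, so features from the different parts can interact.
import torch
import torch.nn as nn


class InceptionStateEncoder(nn.Module):
    def __init__(self, emb_dim: int, out_dim: int, branch_channels: int = 16):
        super().__init__()
        self.branch1 = nn.Conv1d(3, branch_channels, kernel_size=1)
        self.branch3 = nn.Conv1d(3, branch_channels, kernel_size=3, padding=1)
        self.branch5 = nn.Conv1d(3, branch_channels, kernel_size=5, padding=2)
        self.proj = nn.Linear(3 * branch_channels * emb_dim, out_dim)
        self.act = nn.ReLU()

    def forward(self, e_start, e_curr, r_query):
        # Each input: (batch, emb_dim); stacked: (batch, 3, emb_dim).
        x = torch.stack([e_start, e_curr, r_query], dim=1)
        feats = torch.cat(
            [self.act(self.branch1(x)),
             self.act(self.branch3(x)),
             self.act(self.branch5(x))],
            dim=1,
        )  # (batch, 3 * branch_channels, emb_dim)
        return self.proj(feats.flatten(start_dim=1))  # (batch, out_dim) state


# Usage (hypothetical dimensions): the resulting vector would serve as the
# `state` input to the hierarchical policy above.
encoder = InceptionStateEncoder(emb_dim=100, out_dim=200)
state = encoder(torch.randn(1, 100), torch.randn(1, 100), torch.randn(1, 100))
```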