Entropy-aware self-training for graph convolutional networks
2021
Abstract
Recently, graph convolutional networks (GCNs) have achieved significant success in many graph-based learning tasks, especially node classification, due to their excellent ability in representation learning. Nevertheless, it remains challenging for GCN models to obtain satisfying predictions on graphs where only a few nodes have known labels. In this paper, we propose a novel entropy-aware self-training algorithm to boost semi-supervised node classification on graphs with little supervised information. First, an entropy-aggregation layer is developed to strengthen the reasoning ability of GCN models. To the best of our knowledge, this is the first work to combine entropy-based random walk theory with GCN design. Furthermore, we propose a checking part that adds new nodes as supervision after each training round to enhance node prediction. In particular, the checking part is designed based on aggregated features, which is demonstrated to be more effective than previous methods and boosts node classification significantly. The proposed algorithm is validated on six public benchmarks against several state-of-the-art baselines, and the results illustrate its excellent performance.
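The self-training loop the abstract describes repeatedly picks confident unlabeled nodes and adds them to the training set. A minimal sketch of one such selection step is shown below, assuming a generic entropy-based confidence criterion over model outputs; the paper's actual checking part operates on aggregated features, which this sketch approximates with raw class logits. The function names (`prediction_entropy`, `select_pseudo_labels`) are illustrative, not from the paper.

```python
import numpy as np

def prediction_entropy(logits):
    """Shannon entropy of the softmax distribution, one value per node.

    Low entropy means the model is confident about that node's class.
    """
    z = logits - logits.max(axis=1, keepdims=True)      # numerical stability
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return -(p * np.log(p + 1e-12)).sum(axis=1)

def select_pseudo_labels(logits, unlabeled, k):
    """Pick the k unlabeled nodes with the lowest prediction entropy
    and return (node_ids, pseudo_labels) to add as supervision."""
    ent = prediction_entropy(logits[np.asarray(unlabeled)])
    order = np.argsort(ent)[:k]                          # most confident first
    chosen = np.asarray(unlabeled)[order]
    return chosen, logits[chosen].argmax(axis=1)
```

In a full self-training round, the selected nodes and their predicted labels would be appended to the labeled set before retraining the GCN; the entropy threshold or budget `k` controls how aggressively pseudo-labels are added.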