Concordant Contrastive Learning for Semi-supervised Node Classification on Graph
2021
Semi-supervised node classification is a fundamental problem in relational data modeling and has been studied extensively in the graph neural network (GNN) literature. Under the homophily assumption, GNNs smooth the features of adjacent nodes, which yields hybrid class distributions in the feature space when labeled nodes are scarce. Moreover, existing methods are inherently non-robust because of their deterministic propagation. To address these two limitations, we propose Concordant Contrastive Learning (CCL), a novel method for semi-supervised node classification on graphs. Specifically, we generate two groups of data augmentations by randomly masking node features, and we propagate node features separately with low- and high-order graph topology information. We further design regularization losses at two granularities. The coarse-grained regularization loss (i.e., a center-level contrastive loss) preserves the identity of each class against the rest, guiding the model toward discriminative class distributions. The fine-grained regularization loss (i.e., an instance-level contrastive loss) enforces consistency between the soft assignments of different augmentations of the same node. Extensive experiments on benchmark datasets show that CCL significantly outperforms a wide range of state-of-the-art baselines on semi-supervised node classification.
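The components described in the abstract can be sketched in code. The following is a minimal, hypothetical PyTorch illustration, not the authors' implementation: `mask_features` produces one augmented view by randomly zeroing feature entries, `instance_level_loss` enforces consistency between the soft assignments of two views of the same nodes, and `center_level_loss` contrasts node embeddings against class centers (all function names, the cosine-similarity formulation, and the temperature value are assumptions for illustration).

```python
import torch
import torch.nn.functional as F

def mask_features(x, mask_rate=0.3):
    """One data augmentation view: randomly zero node feature entries."""
    keep = (torch.rand(x.shape) >= mask_rate).to(x.dtype)
    return x * keep

def instance_level_loss(p1, p2, eps=1e-8):
    """Fine-grained regularization: symmetric cross-entropy between the
    soft class assignments (rows sum to 1) of two augmented views."""
    ce12 = -(p2 * torch.log(p1 + eps)).sum(dim=1).mean()
    ce21 = -(p1 * torch.log(p2 + eps)).sum(dim=1).mean()
    return 0.5 * (ce12 + ce21)

def center_level_loss(z, labels, num_classes, temperature=0.5):
    """Coarse-grained regularization: pull each node embedding toward its
    class center and away from the other centers (labels in 0..C-1)."""
    centers = torch.stack(
        [z[labels == c].mean(dim=0) for c in range(num_classes)]
    )  # (C, D)
    # Cosine similarity between every node (N, 1, D) and every center (1, C, D).
    logits = F.cosine_similarity(
        z.unsqueeze(1), centers.unsqueeze(0), dim=2
    ) / temperature  # (N, C)
    return F.cross_entropy(logits, labels)
```

A training step would combine both losses (plus the supervised loss on labeled nodes) after propagating the two masked views through low- and high-order topology.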