Mutual teaching for graph convolutional networks

2021 
Abstract. Graph convolutional networks generate reasonable predictions for unlabeled samples through transductive label propagation. Because samples are predicted with different confidences, we treat high-confidence predictions as pseudo labels so that more samples can be selected for updating the models. We propose a new training strategy called mutual teaching, in which two models are trained simultaneously and teach each other during each batch. Each network performs a forward pass over all samples, the samples with high-confidence predictions are used to expand the label set, and each model is then updated with the samples selected by its peer network. We regard the high-confidence predictions as useful knowledge, and each network teaches its peer using this knowledge. In the proposed strategy, the pseudo-label set of a network is derived from its peer network, which improves performance significantly. Experiments conducted on three citation network datasets demonstrate that our method outperforms state-of-the-art methods under very low label rates.
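The following is a minimal sketch of the mutual-teaching loop the abstract describes, written in PyTorch. The two-layer GCN, the confidence threshold of 0.9, the use of hard (argmax) pseudo labels, and the helper names `select_confident` and `mutual_teaching_step` are all illustrative assumptions, not the authors' exact configuration; only the core idea, that each model is trained on pseudo labels selected by its peer, follows the paper.

```python
# Hedged sketch of mutual teaching for two GCNs (assumptions noted above).
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCN(nn.Module):
    """Two-layer GCN: logits = A_hat @ relu(A_hat @ X @ W0) @ W1."""
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.w0 = nn.Linear(in_dim, hid_dim)
        self.w1 = nn.Linear(hid_dim, n_classes)

    def forward(self, a_hat, x):
        h = F.relu(a_hat @ self.w0(x))
        return self.w1(a_hat @ h)  # class logits per node

def select_confident(logits, unlabeled_idx, threshold=0.9):
    """Hypothetical selection rule: keep unlabeled nodes whose softmax
    confidence exceeds the threshold, returning their hard pseudo labels."""
    probs = F.softmax(logits[unlabeled_idx], dim=1)
    conf, pseudo = probs.max(dim=1)
    keep = conf > threshold
    return unlabeled_idx[keep], pseudo[keep]

def mutual_teaching_step(model_a, model_b, opt_a, opt_b,
                         a_hat, x, y, labeled_idx, unlabeled_idx):
    """One training step: each model is updated with the ground-truth
    labels plus the pseudo labels selected by its *peer* network."""
    with torch.no_grad():
        idx_a, pl_a = select_confident(model_a(a_hat, x), unlabeled_idx)
        idx_b, pl_b = select_confident(model_b(a_hat, x), unlabeled_idx)

    # Model A learns from B's confident predictions, and vice versa.
    for model, opt, extra_idx, extra_y in (
        (model_a, opt_a, idx_b, pl_b),
        (model_b, opt_b, idx_a, pl_a),
    ):
        logits = model(a_hat, x)
        loss = F.cross_entropy(logits[labeled_idx], y[labeled_idx])
        if extra_idx.numel() > 0:
            loss = loss + F.cross_entropy(logits[extra_idx], extra_y)
        opt.zero_grad()
        loss.backward()
        opt.step()
```

Here `a_hat` would be the symmetrically normalized adjacency with self-loops, D^{-1/2}(A + I)D^{-1/2}, as is standard for GCNs; because pseudo labels are produced under `torch.no_grad()`, no gradient flows from one network into its peer, so the two models remain independent teachers.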