Boosting semi-supervised network representation learning with pseudo-multitasking

2021 
Semi-supervised network representation learning is an active topic in the graph mining community; it aims to learn low-dimensional vector representations of vertices using partial label information. In particular, graph neural networks integrate structural information with side information such as vertex attributes to learn node representations. Although existing semi-supervised graph learning methods perform well with limited labeled data, they are often hampered when the labeled dataset is very small. To mitigate this issue, we propose PMNRL, a pseudo-multitask learning framework for semi-supervised network representation learning that boosts the expressive power of graph networks such as the vanilla GCN (Graph Convolutional Network) and GAT (Graph Attention Network). In PMNRL, we leverage the community structure of a network to create a pseudo task that classifies nodes' community affiliations, and we jointly learn the two tasks (i.e., the original task and the pseudo task). The proposed scheme exploits the inherent connection between structural proximity and label similarity to improve performance without resorting to more labels. The framework is implemented in two ways: a two-stage method and an end-to-end method. In the two-stage method, communities are first detected, and the community affiliations are then used as "labels" alongside the original labels to train the joint model. In the end-to-end method, unsupervised community learning is combined with the representation learning process through shared layers and task-specific layers, so as to encourage common features and task-specific features at the same time. Experimental results on three real-world benchmark networks demonstrate that our framework improves the vanilla models without any additional labels, especially when labels are very scarce.
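The shared-layer/task-specific-layer design described above can be sketched as a minimal joint-loss model. This is an illustrative toy in NumPy, not the paper's implementation: the dimensions, the `alpha` task-weighting parameter, and the random labels are all hypothetical, and a real GCN/GAT message-passing layer is replaced by a plain dense layer for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(probs, labels):
    return -np.log(probs[np.arange(len(labels)), labels] + 1e-12).mean()

# Hypothetical sizes: 16-dim node features, 8-dim shared hidden layer,
# 3 label classes (original task), 4 communities (pseudo task).
n_nodes, d_in, d_hid, n_classes, n_comms = 10, 16, 8, 3, 4

X = rng.normal(size=(n_nodes, d_in))           # node features
y_label = rng.integers(0, n_classes, n_nodes)  # partial labels (original task)
y_comm = rng.integers(0, n_comms, n_nodes)     # community affiliations (pseudo task)

# Shared layer learns common features; the two heads specialize per task.
W_shared = rng.normal(scale=0.1, size=(d_in, d_hid))
W_label = rng.normal(scale=0.1, size=(d_hid, n_classes))
W_comm = rng.normal(scale=0.1, size=(d_hid, n_comms))

h = np.tanh(X @ W_shared)       # shared representation
p_label = softmax(h @ W_label)  # original-task head
p_comm = softmax(h @ W_comm)    # pseudo-task head

alpha = 0.5  # hypothetical weight balancing the pseudo task against the original one
joint_loss = cross_entropy(p_label, y_label) + alpha * cross_entropy(p_comm, y_comm)
print(joint_loss)
```

Training would minimize `joint_loss` over all three weight matrices, so gradients from the community pseudo task also shape the shared representation even for nodes with no original label.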