Fundamental Limits of Deep Graph Convolutional Networks for Graph Classification
2022
Graph convolutional networks (GCNs) are a widely used method for graph representation learning. To elucidate their capabilities and limitations for graph classification, we investigate their power to generate well-separated embedding vectors for graphs sampled from different random graph models, which correspond to different class-conditional distributions in a classification problem. It has been recognized that metric properties of learned representations are important for reducing the complexity of classifiers trained on them. Additionally, we show that the inability to generate well-separated embedding vectors for two different graph models implies information-theoretic indistinguishability of these models based on noise-perturbed embedding vectors of sample graphs. We consider graph models arising from graphons, which parametrize all infinite exchangeable graph models. We precisely characterize, in terms of degree profile closeness, the set of graphon pairs that are indistinguishable (in both the metric and information-theoretic senses) by a GCN with depth at least logarithmic in the sample graph size. Outside this set, a very simple architecture suffices for distinguishability. We then exhibit a concrete, infinite set of graphon pairs that are well-separated in cut distance yet indistinguishable by a GCN. These results theoretically match empirical observations of several prior works. Finally, we give empirical results on synthetic and real graph classification datasets, giving some indication that degree profile closeness gives rise to indistinguishability of graph distributions in real datasets, even beyond our theoretical framework.
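The setup the abstract describes can be sketched in a few lines: sample graphs from two graphons (latent variables drawn uniformly on [0,1], edges drawn independently with graphon-valued probabilities), push them through a simple untrained degree-normalized GCN, and compare the mean-pooled embeddings. This is a minimal illustration, not the paper's exact architecture or experiments; the two example graphons (Erdős–Rényi with p = 1/2, and a "gradient" graphon with the same edge density), the logarithmic depth, and the degree-based input features are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_graphon(W, n):
    """Sample an n-node graph from graphon W: u_i ~ U[0,1], edge w.p. W(u_i, u_j)."""
    u = rng.random(n)
    P = W(u[:, None], u[None, :])                 # edge-probability matrix
    A = (rng.random((n, n)) < P).astype(float)
    A = np.triu(A, 1)                             # keep upper triangle, no self-loops
    return A + A.T                                # symmetrize

def gcn_embedding(A, depth):
    """Untrained GCN: normalized degrees as input features, repeated
    row-normalized neighborhood averaging, then mean pooling."""
    n = A.shape[0]
    deg = A.sum(1) + 1.0                          # +1 from an added self-loop
    A_hat = (A + np.eye(n)) / deg[:, None]        # row-stochastic propagation matrix
    h = (A.sum(1) / n)[:, None]                   # degree profile as node features
    for _ in range(depth):
        h = A_hat @ h
    return float(h.mean())                        # scalar graph embedding

n = 200
depth = int(np.ceil(np.log2(n)))                  # depth logarithmic in graph size

W_er = lambda x, y: 0.5 + 0.0 * (x + y)           # Erdős–Rényi, p = 1/2
W_grad = lambda x, y: (x + y) / 2                 # gradient graphon, same edge density

e1 = np.mean([gcn_embedding(sample_graphon(W_er, n), depth) for _ in range(20)])
e2 = np.mean([gcn_embedding(sample_graphon(W_grad, n), depth) for _ in range(20)])
```

Both graphons here have the same expected edge density (1/2) but different degree profiles, which is exactly the kind of pair the paper's characterization addresses: whether a GCN of this form can separate them is governed by how close those profiles are, not by their cut distance.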