Hardware Acceleration of Graph Neural Networks

2020 
Graph neural networks (GNNs) have been shown to extend the power of machine learning to problems with graph-structured inputs. Recent research has shown that these algorithms can exceed state-of-the-art performance on applications ranging from molecular inference to community detection. We observe that existing execution platforms (including existing machine learning accelerators) are a poor fit for GNNs due to their unique memory access and data movement requirements. We propose, to the best of our knowledge, the first accelerator architecture targeting GNNs. The architecture includes dedicated hardware units to efficiently execute the irregular data movement required for graph computation in GNNs, while also providing the high compute throughput required by GNN models. We show that our architecture outperforms existing execution platforms in terms of inference latency on several key GNN benchmarks (e.g., 7.5x higher performance than GPUs and 18x higher performance than CPUs at iso-bandwidth).
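The abstract does not describe the accelerator itself, but the computation pattern that motivates it can be sketched: a GNN layer alternates an irregular neighbor-aggregation phase (graph-dependent gather/scatter) with a regular dense-transform phase (matrix multiply). The sketch below is an illustrative assumption, not the paper's design; the function name gnn_layer and the sum-aggregation/ReLU choices are stand-ins for whatever models the paper evaluates.

```python
import numpy as np

def gnn_layer(features, edges, weight):
    """Minimal sketch of one message-passing GNN layer (illustrative only).

    features: (num_nodes, in_dim) node feature matrix
    edges:    (num_edges, 2) array of (src, dst) index pairs
    weight:   (in_dim, out_dim) dense layer weights
    """
    aggregated = np.zeros_like(features)

    # Irregular phase: scatter-add each source node's features into its
    # destination's accumulator. The memory access pattern depends on the
    # graph structure, which is what makes GNNs a poor fit for platforms
    # tuned for dense, regular workloads.
    np.add.at(aggregated, edges[:, 1], features[edges[:, 0]])

    # Regular phase: a dense transformation shared by all nodes, which is
    # where the high compute throughput is needed.
    return np.maximum(aggregated @ weight, 0.0)  # ReLU

# Toy usage on a 4-node path graph.
if __name__ == "__main__":
    feats = np.random.rand(4, 8).astype(np.float32)
    edges = np.array([[0, 1], [1, 2], [2, 3]])
    W = np.random.rand(8, 16).astype(np.float32)
    print(gnn_layer(feats, edges, W).shape)  # (4, 16)
```

The two phases stress the hardware differently: the aggregation is bandwidth-bound and data-dependent, while the transform is compute-bound, which is the mismatch the proposed dedicated hardware units are meant to address.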