
CUDA-Aware OpenSHMEM

2016 
GPUDirect RDMA (GDR) brings the high-performance communication capabilities of RDMA networks like InfiniBand (IB) to GPUs. It enables IB network adapters to directly write/read data to/from GPU memory. Partitioned Global Address Space (PGAS) programming models, such as OpenSHMEM, provide an attractive approach for developing scientific applications with irregular communication characteristics by providing shared memory address space abstractions, along with one-sided communication semantics. However, current approaches and designs for OpenSHMEM on GPU clusters do not take advantage of GDR features, leaving potential performance improvements untapped. In this paper, we introduce "CUDA-Aware" concepts for OpenSHMEM that enable operations to be performed directly from/on buffers residing in GPU memory. We propose novel and efficient designs that ensure "truly one-sided" communication for different intra-/inter-node configurations while working around hardware limitations. We achieve 2.5× and 7× improvements in point-to-point communication for intra-node and inter-node configurations, respectively. Our proposed framework achieves 2.2 µs for an intra-node 8-byte put operation from CPU to local GPU and 3.13 µs for an inter-node 8-byte put operation from GPU to remote GPU. The proposed designs lead to a 19% reduction in the execution time of the Stencil2D application kernel from the SHOC benchmark suite on the Wilkes system, which is composed of 64 dual-GPU nodes. Similarly, the evolution time of the GPULBM application is reduced by 45% on 64 GPUs. On a CS-Storm-based system with 8 GPUs per node, we show 50% and 23% improvement on 32 and 64 GPUs, respectively.
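To make the "CUDA-Aware" idea concrete, the sketch below shows what a one-sided put between GPU-resident buffers might look like with such a runtime: the application passes device pointers (a cudaMalloc'd source and a symmetric destination that the runtime may place in GPU memory) directly to standard OpenSHMEM calls, and the runtime moves the data over GDR without staging through host memory. This is a minimal sketch, not the paper's implementation; the mechanism for placing the symmetric heap on the GPU is assumed to be implementation specific, while the OpenSHMEM calls themselves (shmem_malloc, shmem_putmem, shmem_quiet) are standard.

```c
#include <shmem.h>
#include <cuda_runtime.h>

#define NBYTES 8  /* matches the 8-byte put latencies quoted in the abstract */

int main(void)
{
    shmem_init();
    int me = shmem_my_pe();

    /* Destination: symmetric allocation. With a CUDA-aware runtime the
     * symmetric heap may reside in GPU memory, so remote puts land
     * directly in the target GPU (how this placement is selected is
     * implementation specific and assumed here). */
    char *dst = (char *) shmem_malloc(NBYTES);

    /* Source: a local device buffer. A CUDA-aware runtime accepts the
     * device pointer directly instead of requiring a host staging copy. */
    char *src_dev = NULL;
    cudaMalloc((void **) &src_dev, NBYTES);
    cudaMemset(src_dev, 0x2A, NBYTES);

    if (me == 0 && shmem_n_pes() > 1) {
        /* Truly one-sided: data is written into PE 1's (GPU-resident)
         * buffer without involving PE 1's CPU. */
        shmem_putmem(dst, src_dev, NBYTES, 1);
        shmem_quiet();  /* wait for remote completion */
    }

    shmem_barrier_all();
    cudaFree(src_dev);
    shmem_free(dst);
    shmem_finalize();
    return 0;
}
```

Keeping the standard OpenSHMEM calls unchanged is what lets existing PGAS codes benefit from GDR without source modification; only the placement of the communication buffers changes.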