GREEN Cache: Exploiting the Disciplined Memory Model of OpenCL on GPUs

2015 
As graphics processing unit (GPU) architectures are deployed across a broad computing spectrum, from hand-held and embedded devices to high-performance computing servers, OpenCL has become the de facto standard programming environment for general-purpose computing on GPUs. Unlike conventional CPU programming models, OpenCL has several distinct features, such as its disciplined memory model, which is partially inherited from conventional 3D graphics programming models. At the same time, due to ever-increasing memory bandwidth pressure and low-power requirements, the capacity of on-chip caches in GPUs keeps growing. Given these trends, we see interesting programming model/architecture co-optimization opportunities, in particular in how to utilize large on-chip GPU caches energy-efficiently. In this paper, as a showcase, we study the characteristics of the OpenCL memory model and propose a technique called the GPU Region-aware energy-efficient non-inclusive cache hierarchy, or GREEN cache hierarchy. Our simulation results show that the GREEN cache saves 56 percent of dynamic energy in the L1 cache, 39 percent of dynamic energy in the L2 cache, and 50 percent of leakage energy in the L2 cache, with practically no performance degradation or increase in off-chip accesses.
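For context, the "disciplined" memory model refers to OpenCL's explicitly qualified address spaces (global, constant, local, and private). The toy kernel below is not from the paper; it is a minimal sketch showing how each pointer carries a static region qualifier, which is the kind of per-region information a region-aware cache hierarchy such as GREEN could exploit to steer or bypass cache levels.

/* Illustrative OpenCL C kernel (assumed example, not the paper's workload):
 * every pointer is tagged with an explicit address-space qualifier, so the
 * memory region it touches is known statically at compile time.            */
__kernel void saxpy_region_demo(__global const float *x,   /* global: off-chip, cacheable   */
                                __global float *y,
                                __constant float *alpha,    /* constant: read-only region    */
                                __local float *scratch)     /* local: on-chip shared memory  */
{
    size_t gid = get_global_id(0);
    size_t lid = get_local_id(0);

    /* Automatics default to the __private region (registers/private memory). */
    float xi = x[gid];

    /* Stage a value in __local memory shared within the work-group. */
    scratch[lid] = xi;
    barrier(CLK_LOCAL_MEM_FENCE);

    /* Result written back to the __global region. */
    y[gid] = (*alpha) * scratch[lid] + y[gid];
}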