Architecture-Level Energy Estimation for Heterogeneous Computing Systems

2021 
Due to the data- and computation-intensive nature of many popular data processing applications, e.g., deep neural networks (DNNs), a variety of accelerators have been proposed to improve performance and energy efficiency. As a result, computing systems have become increasingly heterogeneous, with application-specific processing offloaded from the CPU to specialized accelerators. To understand the energy efficiency of such systems, it is desirable to holistically characterize the energy consumption of the CPU, the accelerator, and the data transfers between them. We present a modularized architecture-level energy estimation framework that captures the energy breakdown across the various CPU and accelerator components, with a unified energy estimation back-end that allows easy integration of accelerator modeling frameworks for emerging designs. Using DNN workloads as examples, we show that CPU-side preprocessing and data transfers to and from the accelerator can account for up to 45-50% of total energy when the system is assessed as a whole. Related open-source code is available at https://accelergy.mit.edu.
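The kind of component-wise breakdown the abstract describes can be illustrated with a minimal sketch: total system energy is modeled as the sum, over CPU, accelerator, and transfer components, of per-component action counts multiplied by per-action energy costs. All component names, action types, and numbers below are hypothetical placeholders, not values from the paper or the Accelergy framework.

```python
# Minimal sketch of an architecture-level energy estimate.
# Action names, counts, and per-action energies (in pJ) are illustrative
# assumptions, not data from the paper.

def component_energy(action_counts, energy_per_action):
    """Energy of one component: sum over action types of count * pJ/action."""
    return sum(action_counts[a] * energy_per_action[a] for a in action_counts)

def system_energy(components):
    """Per-component energy for a heterogeneous system description."""
    return {name: component_energy(counts, costs)
            for name, (counts, costs) in components.items()}

# Hypothetical system: CPU-side preprocessing, an accelerator, and
# CPU<->accelerator data transfers.
components = {
    "cpu_preprocessing": ({"int_op": 1e6, "dram_read": 2e5},
                          {"int_op": 1.0, "dram_read": 100.0}),
    "accelerator":       ({"mac": 5e7, "sram_read": 1e7},
                          {"mac": 0.5, "sram_read": 5.0}),
    "transfer":          ({"pcie_word": 3e5},
                          {"pcie_word": 200.0}),
}

breakdown = system_energy(components)
total = sum(breakdown.values())
non_accel_fraction = (breakdown["cpu_preprocessing"] + breakdown["transfer"]) / total
print(breakdown)
print(f"Share outside the accelerator: {non_accel_fraction:.1%}")
```

With a holistic accounting like this, the fraction of energy spent outside the accelerator (preprocessing plus transfers) falls directly out of the breakdown, which is the kind of system-level observation the abstract reports.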