Fine-Grained Power Modeling of Multicore Processors Using FFNNs.
2020
To minimize power consumption while maximizing performance, today’s multicore processors rely on fine-grained run-time dynamic power information – both in the time domain, e.g. \(\mu s\) to ms, and in the space domain, e.g. core-level. The state of the art for deriving such power information is mainly based on predetermined power models that use linear modeling techniques to determine the core-performance/core-power relationship. However, with multicore processors becoming ever more complex, linear modeling techniques can no longer capture all possible core-performance-related power states. Although artificial neural networks (ANNs) have been proposed for coarse-grained power modeling of servers with time resolutions in the range of seconds, no work has yet investigated fine-grained ANN-based power modeling. In this paper, we explore feed-forward neural networks (FFNNs) for core-level power modeling with estimation rates in the range of 10 kHz. To achieve a high estimation accuracy, we determine optimized neural network architectures and train FFNNs on performance counter and power data from a complex out-of-order processor architecture. We show that the relative power estimation error decreases on average by 7.5% compared to a state-of-the-art linear power modeling approach, and by 5.5% compared to a multivariate polynomial regression model. Furthermore, we propose an implementation for run-time inference of the power modeling FFNN and show that the area overhead is negligible.
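The core idea can be illustrated with a minimal sketch: an FFNN that maps a vector of per-core performance-counter samples to a core-level power estimate. The layer sizes, weights, and counter set below are hypothetical placeholders, not the authors' optimized architecture; a trained model would learn its parameters from measured counter/power traces.

```python
import numpy as np

def relu(x):
    # Standard ReLU activation for the hidden layer.
    return np.maximum(x, 0.0)

class PowerFFNN:
    """Illustrative one-hidden-layer FFNN: counters -> power estimate."""

    def __init__(self, n_counters, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        # Small random weights stand in for trained parameters.
        self.W1 = rng.normal(0.0, 0.1, (n_counters, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.1, (n_hidden, 1))
        self.b2 = np.zeros(1)

    def estimate_power(self, counters):
        # counters: 1D array of normalized per-core counter deltas
        # for one sampling interval (e.g. 100 us at a 10 kHz rate).
        h = relu(counters @ self.W1 + self.b1)
        return float(h @ self.W2 + self.b2)

model = PowerFFNN(n_counters=8, n_hidden=16)
sample = np.ones(8)                      # placeholder counter sample
power_estimate = model.estimate_power(sample)
```

At a 10 kHz estimation rate, such a forward pass must complete within 100 microseconds, which motivates the paper's hardware inference implementation with negligible area overhead.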