Coordinated Management of Processor Configuration and Cache Partitioning to Optimize Energy under QoS Constraints

2020 
An effective way to improve energy efficiency is to throttle hardware resources to meet a certain QoS target, specified as a performance constraint, associated with all applications running on a multicore system. Prior art has proposed resource management (RM) frameworks in which the share of the last-level cache (LLC) assigned to each processor core and the voltage-frequency (VF) setting of each core are managed in a coordinated fashion to reduce energy. A drawback of such schemes is that when one core gives up LLC resources to another core, the resulting performance drop must be compensated by a higher VF setting, which leads to a quadratic increase in energy consumption. Allowing each core to also be adapted in how it exploits instruction- and memory-level parallelism (ILP/MLP) enables substantially higher energy savings. This paper proposes a coordinated RM framework for LLC partitioning, processor adaptation, and per-core VF scaling. A first contribution is a systematic study of the trade-offs enabled when the three classes of resources are traded against each other in a coordinated fashion. A second contribution is a new RM framework that exploits these trade-offs to save more energy. Finally, a key challenge in accurately modeling the impact of resource throttling on performance is to predict the amount of MLP with high accuracy. To this end, the paper contributes a mechanism that estimates the effect of MLP across different processor configurations and LLC allocations. Overall, we show that the proposed scheme saves up to 18% of energy, and 10% on average.
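For context on the quadratic energy penalty mentioned in the abstract: dynamic power in CMOS follows the textbook relation below (an illustrative, well-known relation, not a formula quoted from the paper), and since the achievable clock frequency scales roughly linearly with supply voltage, compensating a performance loss by raising the VF setting makes the energy for a fixed amount of work grow roughly quadratically with voltage.

```latex
% Standard CMOS dynamic-power relation (illustrative, not taken from the paper):
%   P_dyn = alpha * C * V^2 * f,  with f roughly proportional to V,
%   so the energy for a fixed amount of work, E = P_dyn * t, scales roughly as V^2.
P_{\mathrm{dyn}} = \alpha C V^{2} f, \qquad f \propto V
\;\;\Rightarrow\;\; E_{\text{fixed work}} \propto V^{2}
```

The sketch below illustrates the kind of coordinated decision such an RM framework must make: for each core, pick a (processor configuration, LLC allocation, VF level) tuple that meets the core's QoS constraint at minimum predicted energy, given a shared LLC budget. All names and the brute-force search here are hypothetical placeholders; the paper's framework relies on analytical performance/energy models (including the MLP estimator) rather than exhaustive enumeration.

```python
# Hypothetical sketch of coordinated resource selection; not the paper's implementation.
from itertools import product

def plan(qos_targets, core_configs, vf_levels, total_llc_ways,
         predict_perf, predict_energy):
    """qos_targets: minimum acceptable performance per core.
    predict_perf(core, cfg, ways, vf) / predict_energy(core, cfg, ways, vf)
    stand in for the analytical models a real RM framework would provide."""
    best = None
    for ways_split in partitions(total_llc_ways, len(qos_targets)):
        total_energy, choice, feasible = 0.0, [], True
        for core, (qos, ways) in enumerate(zip(qos_targets, ways_split)):
            # Keep only settings that meet this core's QoS (performance) constraint.
            options = [(predict_energy(core, cfg, ways, vf), cfg, vf)
                       for cfg, vf in product(core_configs, vf_levels)
                       if predict_perf(core, cfg, ways, vf) >= qos]
            if not options:
                feasible = False
                break
            energy, cfg, vf = min(options, key=lambda t: t[0])
            total_energy += energy
            choice.append((cfg, ways, vf))
        if feasible and (best is None or total_energy < best[0]):
            best = (total_energy, choice)
    return best  # (predicted energy, per-core (config, LLC ways, VF)) or None

def partitions(n, k):
    """Yield all ways to split n LLC ways over k cores, each core getting >= 1."""
    if k == 1:
        yield (n,)
        return
    for first in range(1, n - k + 2):
        for rest in partitions(n - first, k - 1):
            yield (first,) + rest
```

A real runtime would prune this search aggressively and re-evaluate it as application phases change, but the sketch captures the trade-off the abstract describes: giving up LLC ways or core resources on one core only pays off if the compensating VF increase elsewhere does not erase the savings.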