Providing Green Services in HPC Data Centers: A Methodology Based on Energy Estimation

2015 
A supercomputer is an infrastructure built from an interconnection of computers capable of executing tasks in parallel in order to achieve very high performance. Supercomputers are used to run scientific applications in various fields, such as the prediction of severe weather phenomena and the simulation of seismic waves. To meet new scientific challenges, the HPC community has set a new performance objective for the end of the decade: the exascale. To achieve such performance (10^18 floating-point operations per second), an exascale supercomputer will gather several million CPU cores running up to a billion threads and will consume several tens of megawatts. The energy consumption issue at the exascale becomes even more worrying when we consider that energy consumptions higher than 17 MW are already reached at the petascale, while DARPA has set the power threshold for exascale supercomputers at 20 MW. Hence, these systems, which will be 30 times faster than current ones, will have to achieve an energy efficiency of 50 gigaFLOPS per watt, whereas current systems achieve between 2 and 3 gigaFLOPS per watt. As a consequence, reducing the energy consumption of high-performance computing infrastructures is a major challenge for the coming years in order to move to the exascale era.
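For reference, the 50 gigaFLOPS-per-watt figure quoted above follows directly from dividing the exascale performance target by the 20 MW power envelope; a minimal worked calculation:

\[
  \frac{10^{18}\ \text{FLOPS}}{20 \times 10^{6}\ \text{W}} = 5 \times 10^{10}\ \text{FLOPS/W} = 50\ \text{GFLOPS/W}
\]

Compared with the 2 to 3 gigaFLOPS per watt achieved by current systems, this represents roughly a twentyfold improvement in energy efficiency.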