Towards Exascale: Measuring the Energy Footprint of Astrophysics HPC Simulations

2019 
The increasing amount of data produced in astronomy by observational studies, and the size of the theoretical problems to be tackled in the near future, push the need for HPC (High Performance Computing) resources towards the "Exascale". The HPC sector is undergoing a profound transition, in which energy efficiency is one of the toughest challenges and one of the main blocking factors on the path to Exascale. Since ideal peak performance is unlikely to be achieved in realistic scenarios, the aim of this work is to provide insights into the energy consumption of contemporary architectures running real scientific applications in an HPC context. We use two state-of-the-art applications from the astrophysical domain, which we optimized to fully exploit the underlying hardware: a direct N-body code and a semi-analytical code for cosmic structure formation simulations. For these two applications, we quantitatively evaluate the impact of computation on energy consumption when running on three different systems: one that represents the present of HPC systems (an Intel-based cluster), one that (possibly) represents their future (a prototype of an Exascale supercomputer), and a micro-cluster based on Arm MPSoCs. We compare the time-to-solution, energy-to-solution and energy delay product (EDP) metrics for different software configurations. The Arm-based HPC systems have lower energy consumption, albeit running ≈10 times slower.
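The three metrics compared in the abstract can be sketched in a few lines of code. This is a minimal illustration, assuming the common definition EDP = energy × time^w (with weight w = 1); the runtimes and power draws below are hypothetical placeholders, not the paper's measurements:

```python
# Illustrative comparison of time-to-solution, energy-to-solution, and EDP.
# Numbers are hypothetical: an x86 node (fast, power-hungry) vs. an Arm
# MPSoC micro-cluster (~10x slower, but drawing far less power).

def edp(energy_j: float, time_s: float, w: int = 1) -> float:
    """Energy delay product: energy multiplied by time raised to weight w."""
    return energy_j * time_s ** w

runs = {
    "x86_cluster": {"time_s": 100.0,  "power_w": 350.0},
    "arm_mpsoc":   {"time_s": 1000.0, "power_w": 20.0},
}

for name, r in runs.items():
    energy_j = r["power_w"] * r["time_s"]   # energy-to-solution in joules
    print(f"{name}: time={r['time_s']:.0f}s  "
          f"energy={energy_j:.0f}J  EDP={edp(energy_j, r['time_s']):.2e}")
```

With these placeholder figures the Arm run uses less energy (20 kJ vs. 35 kJ) yet has a worse EDP, because EDP penalizes the longer time-to-solution; this is exactly the trade-off the abstract highlights.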