Evaluation of the Impact of Direct Warm-Water Cooling of the HPC Servers on the Data Center Ecosystem

2014 
Over the last 10 years we have witnessed rapid growth in the computational performance of servers used by the scientific community. This trend has been especially visible in HPC, where the price per FLOPS has decreased while the packing density and power consumption of servers have increased. This, in turn, has significantly changed the challenges and costs of maintaining appropriate environmental conditions. Currently, the operational costs over the lifetime of a computing system, mainly the power bill, overshadow the acquisition costs. In addition, the overhead on consumed power introduced by the need to cool the systems may be as large as 40%. This is a substantial portion of the costs; optimizations in this area should therefore be beneficial in terms of both economy and efficiency. There are many approaches to optimizing these costs, mostly focused on air cooling. In contrast, we decided to scrutinize a different approach: using warm water, with inlet temperatures of up to 45 °C, as the cooling medium for a computing cluster, and checking whether this way of cooling can introduce significant savings while, at the same time, simplifying the cooling infrastructure and making it more robust and energy efficient. Additionally, in our approach we used variable coolant temperature and flow to take maximum advantage of so-called free cooling, minimizing the power consumption of the server and cooling-loop pair. To validate this hypothesis, PSNC (Poznan Supercomputing and Networking Center) built a customized prototype system consisting of a hybrid CPU and GPU computing cluster, provided by the company Iceotope, along with a customized, highly manageable and instrumented cooling loop. In the paper we analyze the results of operating our warm-water liquid-cooled system to determine whether there are consequences, positive or negative, for the data center ecosystem, and if so, what they are.
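As a rough back-of-the-envelope illustration (our reading of the 40% figure, not an equation from the paper): if cooling adds up to 40% on top of the power drawn by the IT equipment itself, the corresponding power usage effectiveness (PUE) of the facility is at least

\[ \mathrm{PUE} = \frac{P_{\mathrm{IT}} + P_{\mathrm{cooling}}}{P_{\mathrm{IT}}} = \frac{P_{\mathrm{IT}} + 0.4\,P_{\mathrm{IT}}}{P_{\mathrm{IT}}} = 1.4, \]

so any reduction in the cooling term translates directly into a lower power bill over the system's lifetime.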