    Energy-Optimal Configurations for Single-Node HPC Applications
    Abstract:
    Energy efficiency is a growing concern in modern computing, especially in HPC, due to operational costs and environmental impact, and processors play a major role in this energy consumption. In this work, we propose a methodology to find the energy-optimal frequency and number of active cores for running single-node HPC applications, combining an application-agnostic power model of the architecture with an architecture-aware performance model of the application. We characterize application performance using machine learning, specifically the Support Vector Regression (SVR) algorithm, while power consumption is estimated by modeling CMOS dynamic and static power without any knowledge of the application. The energy-optimal configuration is then estimated by minimizing the product of the outputs of these two models, and the resulting combined model can be used to select the frequency and core count that yield energy-efficient application execution. Results obtained for four PARSEC applications with five different inputs show that the proposed approach uses substantially less energy than the DVFS governor, in both the best and worst cases.
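    The approach described in the abstract can be sketched in a few lines: train an SVR performance model on measured (frequency, cores) runtimes, combine it with an analytic CMOS-style power model, and pick the configuration minimizing predicted energy. This is a minimal illustrative sketch, not the authors' code; the workload, the capacitance and voltage constants, and the linear voltage/frequency scaling are all assumptions.

    ```python
    # Sketch of the paper's idea (illustrative only, not the authors' implementation):
    # learn a performance model with SVR, combine with an analytic power model,
    # and minimize predicted energy = predicted time * modeled power.
    import itertools
    import numpy as np
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)

    freqs = np.array([1.2, 1.6, 2.0, 2.4, 2.8])   # GHz (assumed DVFS states)
    cores = np.array([1, 2, 4, 8])                # active core counts

    # Synthetic "measured" runtimes following Amdahl-style scaling (assumption).
    def true_runtime(f, n, work=100.0, serial=0.2):
        return work * (serial + (1 - serial) / n) / f

    X, y = [], []
    for f, n in itertools.product(freqs, cores):
        X.append([f, n])
        y.append(true_runtime(f, n) * (1 + 0.02 * rng.standard_normal()))
    X, y = np.array(X), np.array(y)

    # Architecture-aware performance model: SVR over (frequency, cores).
    perf_model = SVR(kernel="rbf", C=100.0, epsilon=0.1).fit(X, y)

    # Application-agnostic power model: dynamic CMOS term n*C*V(f)^2*f plus a
    # static term; constants and V(f) scaling are illustrative assumptions.
    def power(f, n, c_eff=1.5, p_static=10.0):
        v = 0.6 + 0.2 * f
        return n * c_eff * v**2 * f + p_static

    # Energy-optimal configuration: minimize predicted time * modeled power.
    def best_config():
        configs = list(itertools.product(freqs, cores))
        t = perf_model.predict(np.array(configs))
        e = t * np.array([power(f, n) for f, n in configs])
        return configs[int(np.argmin(e))]

    print(best_config())
    ```

    In a real deployment the training pairs would come from instrumented runs of the target application, and the power-model constants would be fitted per architecture rather than assumed.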
    Keywords:
    Dynamic demand
    Energy accounting
    Consumption
    Supercomputers are developing very fast: not only does their speed change with each passing day, but their structure also becomes more and more complicated and diverse. In order to describe a supercomputer's structure correctly and effectively, and to provide a uniform description method, this paper proposes a multi-level structure and description of supercomputers. It can serve as a basis for further abstraction or for programming supercomputer simulators, and it can help in designing supercomputer systems.
    Given the increasing demands on supercomputers, a dedicated supercomputer building is required to house a supercomputer that promotes high-end R&D and serves as public service infrastructure at the national level. KISTI, as a public supercomputer center operating its 4th supercomputer (with a capacity of 360 Tflops), is experiencing a shortage of infrastructure systems caused by the increased capacity, and the situation is expected to grow more serious when the 5th and 6th supercomputers are installed. This study analyzes the 5th supercomputer system by projecting its performance level and optimal operating environments through an infrastructure-capacity assessment, and explores ways to construct optimal operating environments through infrastructure-capacity analysis of the supercomputer center. The study can be used to review KISTI's conditions as the only supercomputer center in Korea, and it also provides reference data for planning a new dedicated supercomputer center in terms of feasibility while analyzing infrastructure systems.
    High-performance computing is now one of the emerging fields in computer science and its applications. Top HPC facilities, supercomputers, offer great opportunities for modeling diverse processes, allowing more and better products to be created without full-scale experiments. Current supercomputers and their applications are very complex and thus hard to use efficiently. Performance monitoring systems are tools that help in understanding the efficiency of supercomputing applications and of overall supercomputer functioning. These systems collect data on what happens on a supercomputer (performance data, performance metrics) and present them in a way that allows conclusions to be drawn about performance issues in programs running on the supercomputer. In this paper we give an overview of existing performance monitoring systems designed for or used on supercomputers. We compare the performance monitoring systems found in the literature, describe problems that emerge in monitoring large-scale HPC systems, and outline our vision of the future direction of HPC monitoring system development.
    A method for the efficiency analysis and optimization of supercomputer applications, applied in practice to study the jobs of a user of the Lomonosov2 supercomputer, is proposed. This method covers various stages of job analysis, from studying the general behavior of all of the user's launches on the supercomputer to a detailed study and optimization of the source code of a selected program. The paper describes the general stages of the analysis that were carried out in practice, highlights the performance metrics that should be paid attention to when performing such an analysis, and demonstrates specific examples of job behavior and the effect of the optimization carried out for the task of calculating liquid-crystal droplets.
    One of the main problems of modern supercomputers is the low efficiency of their usage, which leads to significant idle time of computational resources and, in turn, to a decrease in the speed of scientific research. This paper presents three approaches to studying the efficiency of supercomputer resource usage based on monitoring data analysis. The first approach analyzes computing resource utilization statistics, which makes it possible to identify different typical classes of programs, to explore the structure of the supercomputer job flow, and to track overall trends in supercomputer behavior. The second approach is aimed specifically at analyzing off-the-shelf software packages and libraries installed on the supercomputer, since the efficiency of their usage is becoming an increasingly important factor for the efficient functioning of the entire supercomputer. Within the third approach, abnormal jobs, i.e., jobs with abnormally inefficient behavior that differs significantly from the standard behavior of the overall supercomputer job flow, are detected. For each approach, the results obtained in practice at the Supercomputer Center of Moscow State University are demonstrated.
    In keeping with the 'Voyages of Discovery' theme of the Supercomputing 1992 conference, representatives of supercomputing endeavours from around the world have met to speak on national and international supercomputing activities. This minisymposium brings together international representatives from five areas of the world to discuss supercomputing activities in countries that have been underrepresented at the Supercomputing conferences in the past. The topics discussed are high-performance computing and networking in Europe, the supercomputing environment in Taiwan, supercomputing in Australia, India's initiative in massively parallel supercomputing, and supercomputing in Brazil.
    The next-generation supercomputer R&D project started in January 2006. The supercomputer will begin operation in April 2011, and its performance will be enhanced during FY 2011. Design work is currently in progress, and its effective performance is planned to exceed 1.0 petaflops. Like the Earth Simulator, this supercomputer will be the fastest and largest in Japan, and it will be very difficult for other supercomputers to process and visualize the results computed on it. Therefore, we are planning to provide an attached visualization system.
    Supercomputing centers incur major expenses from the massive energy consumption required for supercomputer operation, and this consumption is expected to increase with the continuous growth of supercomputer use. Supercomputers based on low-power processors have been considered as an alternative that can reduce energy consumption; it was reported that the use of low-power processors in a supercomputer could reduce the total energy consumption of a supercomputing center by about 10%. While many researchers have endeavored to investigate the advantages of low-power processors in supercomputing systems, there are only a limited number of studies examining their service life in the field. In fact, low-power processors in a supercomputer are susceptible to thermo-mechanical and electrical stresses due to their exposure to conditions that are harsh compared to their intended use, i.e., mobile device applications. These stresses can result in physical failure of low-power processors, malfunction of supercomputers, and eventually a reduction of supercomputer service life. This paper focuses on estimating the service life of supercomputing systems based on low-power processors.