
QPACE

QPACE (QCD Parallel Computing on the Cell Broadband Engine) is a massively parallel and scalable supercomputer designed for applications in lattice quantum chromodynamics. The QPACE supercomputer is a research project carried out by several academic institutions in collaboration with the IBM Research and Development Laboratory in Böblingen, Germany, and other industrial partners, including Eurotech, Knürr, and Xilinx. The academic design team of about 20 junior and senior scientists, mostly physicists, came from the University of Regensburg (project lead), the University of Wuppertal, DESY Zeuthen, Jülich Research Centre, and the University of Ferrara. The main goal was the design of an application-optimized, scalable architecture that beats industrial products in terms of compute performance, price-performance ratio, and energy efficiency.

The project officially started in 2008. Two installations were deployed in the summer of 2009, and the final design was completed in early 2010. Since then, QPACE has been used for calculations of lattice QCD. The system architecture is also suitable for other applications that mainly rely on nearest-neighbor communication, e.g., lattice Boltzmann methods.

In November 2009 QPACE was the leading architecture on the Green500 list of the most energy-efficient supercomputers in the world. It defended the title in June 2010, when the architecture achieved an energy efficiency of 773 MFLOPS per watt in the Linpack benchmark. In the Top500 list of the most powerful supercomputers, QPACE ranked #110-#112 in November 2009 and #131-#133 in June 2010.

QPACE was funded by the German Research Foundation (DFG) in the framework of SFB/TRR-55 and by IBM. Additional contributions were made by Eurotech, Knürr, and Xilinx.
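To illustrate what "nearest-neighbor communication" means in this context, here is a minimal, hypothetical sketch (not QPACE code) of a one-dimensional stencil sweep with periodic boundaries. Both lattice QCD and lattice Boltzmann methods share this access pattern: each lattice site is updated using only data from its immediate neighbors, so a parallel machine mainly needs fast exchanges between adjacent nodes.

```python
# Illustrative only: a 1D stencil sweep with periodic boundaries.
# Each site is updated from its two immediate neighbors, which is the
# communication pattern that nearest-neighbor interconnects target.

def stencil_sweep(lattice):
    """One relaxation sweep: each site becomes the mean of its two neighbors."""
    n = len(lattice)
    return [(lattice[(i - 1) % n] + lattice[(i + 1) % n]) / 2.0
            for i in range(n)]

data = [0.0, 0.0, 4.0, 0.0]
print(stencil_sweep(data))  # -> [0.0, 2.0, 0.0, 2.0]
```

When such a lattice is distributed over many nodes, only the boundary sites of each node's subvolume must be exchanged per sweep (a "halo exchange"), which is why a dedicated nearest-neighbor network is sufficient.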
In 2008 IBM released the PowerXCell 8i multi-core processor, an enhanced version of the IBM Cell Broadband Engine used, e.g., in the PlayStation 3. The processor received much attention in the scientific community due to its outstanding floating-point performance. It is one of the building blocks of the IBM Roadrunner cluster, which was the first supercomputer architecture to break the PFLOPS barrier.

Cluster architectures based on the PowerXCell 8i typically rely on IBM blade servers interconnected by industry-standard networks such as InfiniBand. For QPACE an entirely different approach was chosen: a custom-designed network co-processor, implemented on Xilinx Virtex-5 FPGAs, connects the compute nodes. FPGAs are re-programmable semiconductor devices whose functional behavior can be customized after manufacture. The QPACE network processor is tightly coupled to the PowerXCell 8i via a Rambus-proprietary I/O interface.

The smallest building block of QPACE is the node card, which hosts the PowerXCell 8i and the FPGA. Node cards are mounted on backplanes, each of which can host up to 32 node cards. One QPACE rack houses up to eight backplanes, four on the front and four on the back, for a maximum of 256 node cards per rack. QPACE relies on a water-cooling solution to achieve this packaging density.
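The network co-processor links the node cards into a nearest-neighbor interconnect. As a sketch, assuming the nodes form a three-dimensional torus (the dimensions below are illustrative, not QPACE's actual machine partitioning), each node's six communication partners can be addressed with simple periodic wraparound:

```python
# Hypothetical sketch: addressing the six nearest neighbors of a node
# in a 3D torus interconnect with periodic (wraparound) boundaries.
# The dimensions are illustrative, not QPACE's actual partitioning.

def torus_neighbors(coord, dims):
    """Return the six neighbor coordinates of `coord` on a 3D torus of size `dims`."""
    x, y, z = coord
    nx, ny, nz = dims
    return [
        ((x - 1) % nx, y, z), ((x + 1) % nx, y, z),  # -x, +x
        (x, (y - 1) % ny, z), (x, (y + 1) % ny, z),  # -y, +y
        (x, y, (z - 1) % nz), (x, y, (z + 1) % nz),  # -z, +z
    ]

# A corner node wraps around to the opposite faces of the torus:
print(torus_neighbors((0, 0, 0), (4, 4, 4)))
```

In such a topology every link is point-to-point between physically adjacent nodes, which is what makes a compact, FPGA-implemented network processor feasible in place of a general-purpose switched fabric.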
