AIPerf: Automated machine learning as an AI-HPC benchmark

2021 
The plethora of complex artificial intelligence (AI) algorithms and available high-performance computing (HPC) power stimulates the expeditious development of AI components with heterogeneous designs. Consequently, the need for cross-stack performance benchmarking of AI-HPC systems is emerging rapidly. Current HPC benchmarks cannot reflect AI computing power because they lack representative workloads, while current AI benchmarks have fixed problem sizes and therefore limited scalability. To address these issues, we propose an end-to-end benchmark suite utilizing automated machine learning (AutoML), which represents real AI scenarios. More importantly, AutoML auto-adapts to machines of various scales and carries an extreme computational cost, making it a desirable workload. We implement the algorithms in a highly parallel and flexible way to ensure efficiency and optimization potential on diverse systems with customizable configurations. The major metric for quantifying performance is floating-point operations per second (FLOPS), which is measured in an analytical and systematic manner. We verify the benchmark's stability at discrete timestamps and its linear scalability across various numbers of machines equipped with up to 400 AI accelerators. With a flexible workload size and a single-metric measurement, our benchmark can scale from small clusters to large AI-HPC systems and rank them easily. The source code, specifications, and detailed procedures are publicly accessible on GitHub.
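The abstract describes FLOPS as being measured analytically rather than read from hardware counters. The sketch below illustrates what such an analytical approach might look like: floating-point operations are counted from layer hyperparameters and divided by measured wall-clock time. The layer shapes, the 3x training-cost factor, and all function names here are illustrative assumptions for exposition, not AIPerf's actual code or API.

import time

def conv2d_flops(h, w, c_in, c_out, k):
    # One multiply-add per kernel element per output position,
    # counted as 2 floating-point operations.
    return 2 * h * w * c_in * c_out * k * k

def dense_flops(n_in, n_out):
    # Fully connected layer: 2 * inputs * outputs (multiply-adds).
    return 2 * n_in * n_out

# Hypothetical model: analytically counted FLOPs per forward pass.
flops_per_sample = (
    conv2d_flops(224, 224, 3, 64, 3)
    + conv2d_flops(112, 112, 64, 128, 3)
    + dense_flops(128, 1000)
)

def measured_flops(batch_size, run_training_step):
    # Divide analytically counted operations by measured wall time.
    start = time.perf_counter()
    run_training_step()  # user-supplied training step over one batch
    elapsed = time.perf_counter() - start
    # A training step is often approximated as ~3x the forward-pass
    # cost (forward + backward); this factor is an assumption here.
    return 3 * flops_per_sample * batch_size / elapsed

In a scalable benchmark of this kind, such a per-step figure would be aggregated across all workers and averaged over the run, which is consistent with the stability and linear-scalability checks the abstract reports.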