ZigZag: Enlarging Joint Architecture-Mapping Design Space Exploration for DNN Accelerators

2021 
Building efficient embedded deep learning systems requires a tight co-design between DNN algorithms, hardware, and algorithm-to-hardware mapping. However, owing to the large joint design space, finding an optimal solution through physical implementation becomes infeasible. To tackle this problem, several design space exploration (DSE) frameworks have emerged recently, yet they suffer from either long runtimes or a limited exploration space. This work introduces ZigZag, a rapid DSE framework for DNN accelerator architectures and mappings. ZigZag extends the common DSE search space with uneven mapping opportunities and smart mapping search strategies. Uneven mapping decouples the operands (W/I/O), the memory hierarchy, and the mappings (temporal/spatial), opening a whole new space for DSE and thus finding better design points than state-of-the-art (SotA) frameworks. To this end, ZigZag uses an enhanced nested-for-loop format as a uniform representation that integrates the algorithm, the accelerator, and the algorithm-to-accelerator mapping. ZigZag consists of three key components: 1) an analytical energy-performance-area Hardware Cost Estimator, 2) two Mapping Search Engines that support even/uneven spatial/temporal mapping search, and 3) an Architecture Generator that auto-explores the wide memory hierarchy design space. Benchmarking experiments against published works, an in-house accelerator, and existing DSE frameworks, together with three case studies, demonstrate the reliability and capability of ZigZag. Thanks to its uneven mapping capabilities, ZigZag finds solutions that are up to 64% more energy efficient than those of SotA frameworks.
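To make the nested-for-loop representation and the notion of uneven mapping concrete, the sketch below models a layer as a plain loop nest and contrasts an even mapping (all operands share one loop split across memory levels) with an uneven one (each operand's loop blocks are assigned to memory levels independently). All class, level, and variable names here are hypothetical illustrations under assumed conventions; they do not reflect ZigZag's actual API.

```python
import math
from dataclasses import dataclass

# Minimal sketch of a nested-for-loop mapping representation.
# Names (Loop, level labels "DRAM"/"SRAM"/"RF") are hypothetical.

@dataclass
class Loop:
    dim: str   # loop dimension, e.g. "K" (output channels), "OX" (output width)
    size: int  # number of iterations at this level

# A DNN layer expressed as a plain loop nest (here: a 2D convolution).
conv_layer = [Loop("K", 64), Loop("C", 32), Loop("OX", 56), Loop("OY", 56),
              Loop("FX", 3), Loop("FY", 3)]

# Even mapping: every operand sees the same loop split across memory levels.
even_mapping = {
    op: {"DRAM": [Loop("K", 64), Loop("C", 32)],
         "SRAM": [Loop("OX", 56), Loop("OY", 56)],
         "RF":   [Loop("FX", 3), Loop("FY", 3)]}
    for op in ("W", "I", "O")
}

# Uneven mapping: each operand's loop blocks are placed in the hierarchy
# independently (e.g. weights kept resident near the datapath while outputs
# are blocked differently) -- the decoupling the abstract describes.
uneven_mapping = {
    "W": {"DRAM": [Loop("K", 64)],
          "RF":   [Loop("C", 32), Loop("FX", 3), Loop("FY", 3)]},
    "I": {"DRAM": [Loop("C", 32)],
          "SRAM": [Loop("OX", 56), Loop("OY", 56), Loop("FX", 3), Loop("FY", 3)]},
    "O": {"SRAM": [Loop("K", 64)],
          "RF":   [Loop("OX", 56), Loop("OY", 56)]},
}

def total_iterations(mapping: dict) -> dict:
    """Per-operand product of loop sizes: a sanity check that each
    operand's mapping covers exactly its relevant layer dimensions."""
    return {op: math.prod(l.size for lvls in levels.values() for l in lvls)
            for op, levels in mapping.items()}

print(total_iterations(uneven_mapping))
```

Under this kind of representation, a cost model can walk each operand's per-level loop nest separately to count memory accesses, which is what allows uneven mappings to expose design points an even (operand-coupled) search would miss.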