Resource and Performance Estimation for CNN Models using Machine Learning
2021
Field-Programmable Gate Array (FPGA) based hardware accelerators offer reconfigurability, performance, adaptability, and good energy efficiency. The majority of Convolutional Neural Network (CNN) based inference systems are initially developed using standardized frameworks like PyTorch, TensorFlow, and others. These Python or Python-like models can be mapped onto FPGAs to build accelerators. Mapping frameworks convert the CNN models to high-level languages such as C/C++ or OpenCL so that standard high-level synthesis (HLS) tools can port the designs onto an FPGA. The logic utilization and performance of FPGA-based accelerators depend on the CNN network parameters, the architectural selection (data-flow, pipelined, etc.), and synthesis-based control of design generation. A scalable multi-layer CNN hardware accelerator is modeled in the Vitis HLS 2020 tool. Early estimation of performance and hardware resources helps choose the best CNN network before it is run through time-consuming high-level synthesis and physical design mapping for FPGAs. We present various Machine Learning (ML) models to estimate the logic utilization and computation time from the Python design descriptions of CNNs. Our results show accurate estimation of performance and resource utilization across various multi-layer CNN networks in negligible time, before running high-level synthesis.
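To illustrate the estimation idea described above, the sketch below shows one possible way to predict FPGA logic utilization and computation time from a CNN's layer hyperparameters using a regression model. This is not the authors' code: the feature extraction (`conv_features`, `network_features`), the choice of a random-forest regressor, and the placeholder training data are all assumptions; a real dataset would pair CNN design descriptions with post-synthesis reports from the HLS tool.

```python
# Illustrative sketch (assumed, not from the paper): estimate FPGA resource use
# and latency for a CNN from its layer hyperparameters via regression.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

def conv_features(in_ch, out_ch, k, h, w):
    """Summarize one conv layer: parameter count and MAC operation count."""
    params = in_ch * out_ch * k * k
    macs = params * h * w  # output map assumed to keep the same HxW for simplicity
    return [in_ch, out_ch, k, h, w, params, macs]

def network_features(layers):
    """Aggregate per-layer features into one fixed-length vector per network."""
    per_layer = np.array([conv_features(*layer) for layer in layers])
    return np.concatenate([per_layer.sum(axis=0), [len(layers)]])

# Hypothetical training data: each entry pairs a layer list with
# [LUTs, latency_cycles] taken from previously synthesized designs
# (the numeric values here are placeholders, not measured results).
designs = [
    ([(3, 16, 3, 32, 32), (16, 32, 3, 16, 16)],                     [21000, 1.8e6]),
    ([(3, 32, 3, 64, 64), (32, 64, 3, 32, 32)],                     [54000, 9.5e6]),
    ([(3, 8, 5, 28, 28)],                                           [9000, 0.6e6]),
    ([(3, 16, 3, 32, 32), (16, 32, 3, 16, 16), (32, 64, 3, 8, 8)],  [38000, 2.4e6]),
]
X = np.array([network_features(layers) for layers, _ in designs])
y = np.array([targets for _, targets in designs])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)  # multi-output regression: [LUTs, latency]

luts, cycles = model.predict(X_test)[0]
print(f"Predicted LUTs: {luts:.0f}, predicted latency: {cycles:.0f} cycles")
```

Because the prediction uses only the network description, it runs in negligible time compared with a full HLS synthesis run, which is the motivation for pre-synthesis estimation stated in the abstract.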