FPGA Acceleration of Generative Adversarial Networks for Image Reconstruction

2021 
Accurate and efficient machine learning algorithms are of vital importance to many problems, especially classification and clustering tasks. In recent years, a new class of machine learning model, the Generative Adversarial Network (GAN), has been introduced; it relies on two neural networks, a generative network (generator) and a discriminative network (discriminator), which compete with each other with the aim of generating new data such as images. For example, a GAN can reconstruct an image that is corrupted by noise or has damaged regions. Image reconstruction has found applications in computer vision, augmented reality, human-computer interaction, animation, and medical imaging. However, this type of algorithm requires a large number of MAC (multiply-accumulate) operations and consumes considerable power. In this work, we implement an image reconstruction algorithm with GANs; specifically, as a case study, we train a model capable of restoring clothing images from the Fashion-MNIST dataset. Additionally, we implement and accelerate it on a Xilinx FPGA SoC, a platform proven to address this kind of problem efficiently in terms of both performance and power. The design achieves better performance and power efficiency than CPU and GPU implementations, with a 0.013 ms average reconstruction time per image and 43 dB PSNR in the quantized FPGA configuration.
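
To make the setup concrete, the sketch below shows a minimal generator/discriminator pair for 28x28 Fashion-MNIST reconstruction together with the PSNR metric quoted above, written in PyTorch. Layer shapes, names, and the inpainting-style generator are illustrative assumptions, not the authors' architecture.

# Hypothetical sketch (not the paper's code): a minimal GAN for restoring
# corrupted 1x28x28 Fashion-MNIST images, plus the PSNR metric in dB.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a corrupted 1x28x28 image to a restored 1x28x28 image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),  # pixel values in [0, 1]
        )

    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Scores an image as real (from the dataset) or reconstructed."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(28 * 28, 128), nn.LeakyReLU(0.2),
            nn.Linear(128, 1),  # raw logit; pair with BCEWithLogitsLoss
        )

    def forward(self, x):
        return self.net(x)

def psnr(restored, reference, max_val=1.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE)."""
    mse = torch.mean((restored - reference) ** 2)
    return 10.0 * torch.log10(max_val ** 2 / mse)

During training, the two networks are optimized adversarially: the discriminator learns to separate dataset images from restored ones, while the generator learns to produce restorations that fool it. For FPGA deployment, such a trained generator would typically be quantized (e.g., to fixed-point weights and activations) before the MAC-heavy convolutions are mapped to the programmable logic.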