FlexiGAN: An End-to-End Solution for FPGA Acceleration of Generative Adversarial Networks

2018 
Generative Adversarial Networks (GANs) are among the frontiers of deep networks. GANs consist of two models, a generative model and a discriminative model. While the discriminative model uses the conventional convolution operator, the generative model is fundamentally different due to its use of the transposed convolution operator. Unlike the conventional convolution, the transposed convolution first inserts a large number of zeros in its input. This zero-insertion leads to a large number of inconsequential operations and creates different patterns of computation across the sliding windows. The inconsequential operations along with the variation in computation patterns lead to significant resource underutilization when evaluated using conventional convolution hardware. This paper introduces FlexiGAN, an end-to-end solution, from high-level GAN specification to an optimized synthesizable FPGA accelerator. The FlexiGAN framework is coupled with a novel architecture that aims to harness the benefits of both MIMD and SIMD execution models. The proposed architecture separates data retrieval and data processing units at the finest granularity of each compute engine. Leveraging this separation between data retrieval and data processing units in the compute engines, we introduce a succinct set of operations that enables us to significantly reduce on-chip memory usage, which is generally scarce in FPGAs. We evaluate our end-to-end solution across various GANs from the machine learning literature. FlexiGAN provides 2.4× higher performance than an optimized conventional convolution design. In addition, FlexiGAN, on average, yields 2.8× (up to 3.7×) improvement in Performance-per-Watt over a high-end GPU. These results indicate that FlexiGAN is an effective initial step towards providing an end-to-end solution for accelerating GANs.
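To illustrate the zero-insertion the abstract refers to, below is a minimal NumPy sketch (not from the paper) that expresses a strided transposed convolution as zero insertion followed by an ordinary convolution. The function name, array shapes, and stride are illustrative assumptions; the point is that most multiply-accumulates land on inserted zeros, which is the source of the wasted work FlexiGAN targets.

```python
# Minimal sketch (illustrative, not the paper's implementation):
# transposed convolution expressed as zero insertion + ordinary convolution.
import numpy as np

def transposed_conv2d(x, w, stride=2):
    """Transposed convolution of input x (H x W) with kernel w (K x K),
    realized via explicit zero insertion between input elements."""
    H, W = x.shape
    K = w.shape[0]

    # 1) Insert (stride - 1) zeros between neighboring input elements.
    up = np.zeros(((H - 1) * stride + 1, (W - 1) * stride + 1), dtype=x.dtype)
    up[::stride, ::stride] = x

    # 2) Pad by K - 1 so every output pixel sees a full K x K window.
    up = np.pad(up, K - 1)

    # 3) Ordinary (sliding-window) convolution with the flipped kernel.
    #    Many of these multiplications hit the inserted zeros.
    wf = w[::-1, ::-1]
    out = np.zeros((up.shape[0] - K + 1, up.shape[1] - K + 1), dtype=x.dtype)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(up[i:i + K, j:j + K] * wf)
    return out

x = np.arange(9, dtype=float).reshape(3, 3)
w = np.ones((3, 3))
print(transposed_conv2d(x, w, stride=2).shape)  # (7, 7) = (H-1)*stride + K
```

In this toy case only 9 of the 25 elements in the upsampled input are nonzero, so a conventional convolution engine spends most of its cycles multiplying by zero, and the mix of zero and nonzero taps also varies from one sliding window to the next.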