AlphaGAN: Fully Differentiable Architecture Search for Generative Adversarial Networks.

2021 
Generative Adversarial Networks (GANs) are formulated as minimax game problems, where generators attempt to approach real data distributions through adversarial learning against discriminators, which learn to distinguish generated samples from real ones. In this work, we aim to boost model learning from the perspective of network architectures, by incorporating recent progress on automated architecture search into GANs. Specifically, we propose a fully differentiable search framework, dubbed alphaGAN, in which the searching process is formalized as a bi-level minimax optimization problem. The outer-level objective seeks an optimal architecture toward a pure Nash equilibrium, conditioned on the network parameters optimized with a traditional adversarial loss at the inner level. Extensive experiments on the CIFAR-10 and STL-10 datasets show that our algorithm can obtain high-performing architectures with only 3 GPU-hours on a single GPU, in a search space comprising approximately $2\times 10^{11}$ possible configurations. We further validate the method on the state-of-the-art StyleGAN2 and push the Fréchet Inception Distance (FID) score further, achieving 1.94 on CelebA, 2.86 on LSUN-church, and 2.75 on FFHQ, with relative improvements of $3\%\sim26\%$ over the baseline architecture. We also provide a comprehensive analysis of the behavior of the searching process and the properties of the searched architectures.
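The bi-level structure described above can be illustrated on a toy quadratic minimax game. The sketch below is purely hypothetical and not the paper's implementation: `alpha` stands in for the architecture parameters, the inner loop performs adversarial weight updates, and the outer loop nudges `alpha` to shrink a closed-form duality gap, a common proxy for distance to a pure Nash equilibrium.

```python
import numpy as np

def value(g, d, alpha):
    """Toy inner-level adversarial objective V(g, d; alpha) (illustrative only)."""
    return 0.5 * g**2 + alpha * g * d - 0.5 * d**2

def duality_gap(g, d, alpha):
    """Outer-level proxy for distance to a pure Nash equilibrium:
    max_d V(g, d) - min_g V(g, d), available in closed form for this
    quadratic game; it is zero exactly at the equilibrium (g, d) = (0, 0)."""
    return 0.5 * (1.0 + alpha**2) * (g**2 + d**2)

def search(alpha=1.0, g=1.0, d=1.0, outer_steps=20, inner_steps=50, lr=0.05):
    gaps = []
    for _ in range(outer_steps):
        # Inner level: simultaneous gradient descent-ascent on the
        # "network weights" g (minimizer) and d (maximizer).
        for _ in range(inner_steps):
            g, d = g - lr * (g + alpha * d), d + lr * (alpha * g - d)
        # Outer level: gradient step on alpha to reduce the duality gap
        # (d(gap)/d(alpha) = alpha * (g**2 + d**2) for this toy objective).
        alpha -= lr * alpha * (g**2 + d**2)
        gaps.append(duality_gap(g, d, alpha))
    return alpha, gaps

alpha, gaps = search()
print(f"final duality gap: {gaps[-1]:.3e}")
```

In the actual framework the inner objective would be a GAN loss over mixed candidate operations and the outer update would differentiate through the search space; this toy merely shows the alternation of the two optimization levels.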