Determining learning direction via multi-controller model for stably searching generative adversarial networks
2021
Abstract The data generated by a Generative Adversarial Network (GAN) inevitably contains noise, which can be reduced by searching for and optimizing the architecture of the GAN. To search for generative adversarial network architectures stably, a neural architecture search (NAS) method, StableAutoGAN, is proposed on the basis of an existing algorithm, AutoGAN. The stability of conventional reinforcement learning (RL)-based NAS methods for GANs suffers from uncertainty of direction: the controller moves forward in its search as soon as it receives a reward, even if that reward is inaccurate. In StableAutoGAN, a multi-controller model is employed to mitigate this problem by comparing the performance of the controllers after they receive rewards. During the search process, each controller learns its sampling policy independently. Meanwhile, the learning effect is measured by a credibility score, which in turn determines which controllers are used. Our experiments show that the standard deviation of the Fréchet Inception Distance (FID) scores of the GANs discovered by StableAutoGAN is approximately 1/16 and 1/8 of that of AutoGAN on CIFAR-10 and STL-10, respectively, while performance remains comparable to that of AutoGAN.
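The abstract does not give the exact policy-update or credibility formulas, so the following Python sketch only illustrates the general multi-controller idea under assumed rules: several controllers sample architectures independently, each receives a noisy reward (taken here as negative FID), and a credibility score decides which controller's policy is ultimately trusted. All names and rules (`Controller`, `credibility`, `evaluate`) are hypothetical stand-ins, not the paper's implementation.

```python
import random


class Controller:
    """Toy stand-in for an RL controller that samples GAN architectures.

    The real AutoGAN-style controller is a policy network trained with
    policy gradients; here we only model the bookkeeping needed to show
    the multi-controller / credibility-score idea.
    """

    def __init__(self, name):
        self.name = name
        self.credibility = 0.0          # assumed credibility score
        self.best_reward = float("-inf")

    def sample_architecture(self):
        # Placeholder: a real controller would emit architecture tokens.
        return {"controller": self.name, "arch_id": random.randrange(1000)}

    def update(self, reward):
        """Policy update plus credibility bookkeeping.

        Assumed rule (not the paper's formula): credibility rises when the
        new reward improves on the best reward this controller has seen.
        """
        improved = reward > self.best_reward
        self.best_reward = max(self.best_reward, reward)
        self.credibility += 1.0 if improved else -1.0
        return improved


def evaluate(arch):
    """Stand-in for training the sampled GAN and scoring it.

    Reward is taken as negative FID, so higher is better; the FID here is
    random noise purely for illustration.
    """
    fid = random.uniform(10.0, 50.0)
    return -fid


def search(num_controllers=3, iterations=20):
    controllers = [Controller(f"ctrl-{i}") for i in range(num_controllers)]
    for _ in range(iterations):
        # Each controller learns independently from its own samples.
        for c in controllers:
            arch = c.sample_architecture()
            c.update(evaluate(arch))
    # Only the most credible controller drives the final sampling, which
    # damps the influence of controllers misled by inaccurate rewards.
    best = max(controllers, key=lambda c: c.credibility)
    return best.sample_architecture()


if __name__ == "__main__":
    print(search())
```

In this sketch, comparing controllers by credibility plays the role the abstract describes: a controller that happened to learn from misleading rewards accumulates a lower score and is ignored, which is the intuition behind the reported reduction in FID variance.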
Keywords: