Generative adversarial networks with mixture of t-distributions noise for diverse image generation

2019 
Abstract Image generation is a long-standing problem in machine learning and computer vision. In order to generate images with high diversity, we propose a novel model called generative adversarial networks with mixture of t-distributions noise (tGANs). In tGANs, the latent generative space is formulated using a mixture of t-distributions. In particular, the parameters of the components in the mixture of t-distributions can be learned along with the other parameters of the model. To improve the diversity of the generated images in each class, each noise vector is concatenated with a class codeword to form the input of the generator of tGANs. In addition, a classification loss is added to both the generator and the discriminator losses to strengthen their performance. We have conducted extensive experiments comparing tGANs with a state-of-the-art pixel-by-pixel image generation approach, PixelCNN, and related GAN-based models. The experimental results and statistical comparisons demonstrate that tGANs perform significantly better than PixelCNN and related GAN-based models for diverse image generation.
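The following is a minimal sketch (not the authors' implementation) of how the generator input described in the abstract could be formed: a noise vector drawn from a mixture of t-distributions, concatenated with a one-hot class codeword. The component parameters (means, scales, degrees of freedom, mixing weights) are fixed placeholders here; in tGANs they are stated to be learned jointly with the rest of the model.

```python
import numpy as np

def sample_mixture_t_noise(n_samples, noise_dim, means, scales, dofs, weights, rng):
    """Draw noise vectors from a mixture of t-distributions.

    A t-distributed vector is generated as mean + scale * z / sqrt(g / dof),
    where z is standard normal and g is chi-squared with `dof` degrees of freedom.
    """
    k = len(weights)
    comps = rng.choice(k, size=n_samples, p=weights)    # pick a mixture component per sample
    z = rng.standard_normal((n_samples, noise_dim))
    g = rng.chisquare(dofs[comps])[:, None]             # one chi-squared variate per sample
    return means[comps] + scales[comps] * z / np.sqrt(g / dofs[comps][:, None])

def generator_input(noise, class_ids, n_classes):
    """Concatenate each noise vector with a one-hot class codeword."""
    codewords = np.eye(n_classes)[class_ids]
    return np.concatenate([noise, codewords], axis=1)

rng = np.random.default_rng(0)
K, D, C = 3, 100, 10                                    # components, noise dim, classes (placeholders)
means = rng.standard_normal((K, D))
scales = np.ones((K, D))
dofs = np.array([3.0, 5.0, 10.0])
weights = np.array([0.3, 0.3, 0.4])

noise = sample_mixture_t_noise(64, D, means, scales, dofs, weights, rng)
z_in = generator_input(noise, rng.integers(0, C, 64), C)
print(z_in.shape)   # (64, 110): noise_dim + n_classes
```

In this sketch the resulting `z_in` would be fed to the generator network, while the classification loss mentioned in the abstract would be computed from an auxiliary class prediction head on the discriminator; those parts are omitted here.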