A Variational Inequality Perspective on Generative Adversarial Networks

2019 
Stability has been a recurrent issue in training generative adversarial networks (GANs). One common way to tackle this issue has been to propose new formulations of the GAN objective. Yet, surprisingly few studies have looked at optimization methods designed specifically for this adversarial training. In this work, we review the "variational inequality" framework, which contains most formulations of the GAN objective introduced so far. Tapping into the mathematical programming literature, we counter some common misconceptions about the difficulties of saddle point optimization, propose to extend standard methods designed for variational inequalities, such as a stochastic version of the extragradient method, to GAN training, and empirically investigate their behavior on GANs.
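For intuition about the extragradient method named in the abstract, here is a minimal sketch, not the paper's implementation: the method first takes an extrapolation step, then updates the iterate using gradients evaluated at the extrapolated point. The toy bilinear objective f(x, y) = x*y, the step size eta, and the function name extragradient are illustrative assumptions; on this problem, simultaneous gradient descent-ascent diverges, while extragradient converges to the saddle point (0, 0).

```python
# Minimal sketch of the extragradient method on the bilinear saddle-point
# problem min_x max_y f(x, y) = x * y (an illustrative toy problem, not the
# paper's GAN experiments). For f(x, y) = x * y, df/dx = y and df/dy = x.

def extragradient(x, y, eta=0.1, steps=1000):
    for _ in range(steps):
        # Extrapolation step: look ahead from the current iterate.
        x_half = x - eta * y   # descent on x
        y_half = y + eta * x   # ascent on y
        # Update step: move from (x, y) using gradients at the extrapolated point.
        x, y = x - eta * y_half, y + eta * x_half
    return x, y

# Simultaneous gradient descent-ascent spirals outward on this problem,
# while extragradient spirals in toward the saddle point (0, 0).
x, y = extragradient(1.0, 1.0)
print(f"after 1000 steps: x = {x:.4f}, y = {y:.4f}")
```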