Addressing GAN Instability with Wasserstein GAN
Are you finding it difficult to stabilize your GAN?
A GAN is stable when the discriminator's accuracy hovers around 0.5 for both real and generated data. Reaching this equilibrium, however, requires careful tuning of both the discriminator and the generator, and stabilizing a GAN can be difficult, if not impossible.
The following explains the methods that I find effective for stabilizing a GAN. I create my neural network models in a Jupyter notebook, running Keras version 2.3.1 and TensorFlow version 2.0.0.
Use WGAN instead of GAN
A GAN is very sensitive to changes in its hyperparameters, which makes it difficult to stabilize. Instead of tuning the hyperparameters, a more stable model can be used: the Wasserstein GAN (WGAN).
Converting a standard GAN into a WGAN is straightforward:
Replace the loss function with the Wasserstein loss function
Use labels of 1 for real data and -1 for fake (generated) data
Remove the final sigmoid activation layer from the discriminator
Limit the update of weights with weight clipping, or use a gradient penalty loss function
In each epoch, train the discriminator (often called the critic in WGAN terminology) multiple times for each training pass of the generator
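The core of these steps can be sketched in plain NumPy (a conceptual sketch, not the full Keras model; in Keras, `wasserstein_loss` would use `K.mean` as a custom loss, and clipping would be applied via a `kernel_constraint` on the critic's layers):

```python
import numpy as np

def wasserstein_loss(y_true, y_pred):
    # Labels are +1 for real and -1 for fake; the critic's final layer is
    # linear (no sigmoid), so y_pred is an unbounded score. Minimizing
    # mean(y_true * y_pred) drives real scores up and fake scores down.
    return np.mean(y_true * y_pred)

def clip_weights(weights, c=0.01):
    # Weight clipping keeps the critic approximately 1-Lipschitz, which
    # the Wasserstein loss requires. (0.01 is the clip value from the
    # original WGAN paper; a gradient penalty is a common alternative.)
    return [np.clip(w, -c, c) for w in weights]
```

The training loop then updates the critic several times (the original WGAN paper uses 5) and clips its weights after each update, before making one generator update.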
Monitoring WGAN Training
When training a WGAN, monitor the loss of both the discriminator and the generator. If the loss converges to a value with no large spikes, the WGAN is working as expected. If large spikes do appear, reduce the learning rate to stabilize training.
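The spike check can be automated with a simple heuristic (a hypothetical helper, not part of Keras): flag a spike when the newest loss is much larger in magnitude than the recent average, and lower the learning rate when it fires.

```python
import numpy as np

def loss_spiked(losses, window=10, factor=5.0):
    # Compare the magnitude of the newest loss against the mean magnitude
    # of the preceding `window` losses; a large ratio suggests instability.
    # `window` and `factor` are illustrative values, not tuned constants.
    history = np.abs(np.asarray(losses[-(window + 1):-1], dtype=float))
    if history.size == 0:
        return False
    return bool(abs(losses[-1]) > factor * history.mean())
```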
The generated data from the WGAN can also be monitored with statistical metrics. Comparing the mean and standard deviation of the real and generated data gives a rough estimate of their similarity.
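As a sketch, that comparison might look like this (hypothetical helper; `real` and `generated` are NumPy arrays of samples, and smaller gaps suggest closer distributions):

```python
import numpy as np

def compare_stats(real, generated):
    # Rough similarity check: absolute gaps between the first two moments
    # (mean and standard deviation) of the real and generated data.
    return {
        "mean_gap": float(abs(real.mean() - generated.mean())),
        "std_gap": float(abs(real.std() - generated.std())),
    }
```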
Thank you for reading. I hope you find this guide helpful for stabilizing your GAN.
Questions or comments? You can reach me at firstname.lastname@example.org
Wayne Cheng is an A.I., machine learning, and deep learning developer at Audoir, LLC. His research involves the use of artificial neural networks to create music. Prior to starting Audoir, LLC, he worked as an engineer in various Silicon Valley startups. He has an M.S.E.E. degree from UC Davis, and a Music Technology degree from Foothill College.