Generative Models

This post accompanies the code I released on GitHub, written in TensorFlow. It summarizes the problem of modelling a given data distribution using Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs), and compares the performance of these models. One might ask: why generate images from a given data distribution when we already have millions of images around? As pointed out by Ian Goodfellow in his NIPS tutorial, there are indeed many applications. One that I found very interesting is the use of GANs (once we have perfected them) to simulate possible futures for an agent in Reinforcement Learning with Policy Gradients.

This post is organised as follows:

VAEs

Variational Autoencoders (VAEs) can be used to model the prior data distribution. A VAE consists of two parts: an encoder and a decoder. The encoder maps the high-level features of the data distribution to a low-level representation of that data, called the latent vector. The decoder, on the other hand, takes this low-level representation and produces a high-level representation of the same data.

Mathematically, let $x$ be the input to the encoder, $z$ be the latent vector and $\hat{x}$ be the output of the decoder, which has the same dimensions as $x$.

Visually, a VAE can be thought of as shown in Fig 1.


Fig 1: Architectural View of VAE (or say a typical Autoencoder)
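
To make this mapping concrete, here is a minimal sketch of such an encoder and decoder as small fully connected networks in TensorFlow 1.x. The layer sizes, the latent dimension and the helper names are illustrative assumptions, not the architecture used in the repository.

  import tensorflow as tf

  img_size   = 28 * 28   # assumed flattened image size (e.g. MNIST)
  latent_dim = 32        # assumed dimensionality of the latent vector z

  def encoder(x):
      # x: [batch, img_size] high-level input -> z: [batch, latent_dim] latent vector
      h = tf.layers.dense(x, 256, activation=tf.nn.relu)
      z = tf.layers.dense(h, latent_dim)
      return z

  def decoder(z):
      # z: [batch, latent_dim] -> reconstruction with the same dimensions as x
      h      = tf.layers.dense(z, 256, activation=tf.nn.relu)
      logits = tf.layers.dense(h, img_size)   # linear output of the decoder
      x_hat  = tf.nn.sigmoid(logits)          # pixel values in [0, 1]
      return x_hat, logits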

Hmmm, how is this any different from a standard Autoencoder? The key difference lies in the restriction we place on the latent vector. In the case of a standard Autoencoder, we only care about the reconstruction loss, i.e.,

$$\mathcal{L}_{reconstruction} = \lVert x - \hat{x} \rVert^2$$

whereas in the case of Variational Autoencoders, we expect the latent vector to follow a certain distribution, usually $\mathcal{N}(0, I)$ (the unit Gaussian distribution), and this results in the optimization of the following loss:

$$\mathcal{L} = \lVert x - \hat{x} \rVert^2 + KL\left(\mathcal{N}(\mu, \Sigma) \,\Vert\, \mathcal{N}(0, I)\right)$$

Here, $I$ is the identity matrix and $\mathcal{N}(\mu, \Sigma)$ is the distribution of the latent vector $z$, where the mean $\mu$ and the covariance $\Sigma$ are computed by the Neural Network. $KL\left(\mathcal{N}(\mu, \Sigma) \,\Vert\, \mathcal{N}(0, I)\right)$ is the KL divergence from the distribution $\mathcal{N}(\mu, \Sigma)$ to $\mathcal{N}(0, I)$.

With this additional term in the loss function, there is a trade-off between how well the model generates images and how close the distribution of the latent vector is to a unit Gaussian distribution. The two components can be weighted by two hyperparameters, say $\lambda_1$ and $\lambda_2$.
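
As a rough sketch of how this looks in code (TensorFlow 1.x as above): the encoder of a VAE outputs a mean and a standard deviation instead of $z$ directly, $z$ is sampled with the reparameterization trick, and the two loss terms are weighted. The function names and the weights `lambda_rec`/`lambda_kl` are illustrative assumptions, not names from the repository.

  def sample_latent(mean, std):
      # Reparameterization trick: z = mean + std * eps with eps ~ N(0, I),
      # which keeps the sampling step differentiable.
      eps = tf.random_normal(tf.shape(std))
      return mean + std * eps

  def weighted_vae_loss(x, x_hat, mean, std, lambda_rec=1.0, lambda_kl=1.0):
      # Trade-off between reconstruction quality and how close the latent
      # distribution N(mean, std^2) is to the unit Gaussian N(0, I).
      recon = tf.reduce_sum(tf.square(x - x_hat), 1)
      kl    = 0.5 * tf.reduce_sum(tf.square(mean) + tf.square(std)
                                  - tf.log(tf.square(std)) - 1, 1)
      return tf.reduce_mean(lambda_rec * recon + lambda_kl * kl)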

GANs

GANs are yet another way of generating data from a given prior distribution. They consist of two parts that are trained simultaneously: the Discriminator and the Generator.

The Discriminator classifies whether an image is “real” or “fake”, while the Generator, as the name suggests, generates images from random noise (often called the latent vector or code; this noise is typically drawn from a uniform or Gaussian distribution). The task of the Generator is to generate images such that the Discriminator cannot distinguish the “real” images from the “fake” ones. As it turns out, the Generator and the Discriminator are in opposition with one another: the Discriminator tries hard to distinguish real from fake images, while at the same time the Generator tries to produce images that look ever more real, forcing the Discriminator to classify them as “real”.

The typical structure of a GAN is shown in Fig 2.


Fig 2: Overview of GAN
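
A minimal sketch of such a Discriminator in TensorFlow 1.x: a small convolutional network that outputs a single logit per image, which is later squashed into the probability of the image being real. The filter sizes and the scope name are assumptions for illustration, not the repository's exact network.

  def discriminator(images, reuse=False):
      # images: [batch, height, width, channels] -> one "realness" logit per image
      with tf.variable_scope("discriminator", reuse=reuse):
          h = tf.layers.conv2d(images, 64, 5, strides=2, padding="same",
                               activation=tf.nn.leaky_relu)
          h = tf.layers.conv2d(h, 128, 5, strides=2, padding="same",
                               activation=tf.nn.leaky_relu)
          h = tf.layers.flatten(h)
          logit = tf.layers.dense(h, 1)   # linear output; sigmoid is applied in the loss
      return logit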

The Generator consists of deconvolution layers (transposed convolution layers) that produce images from the code. Fig 3 shows the architecture of such a network.


Fig 3: Generator of a typical GAN (Image taken from OpenAI)
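
Sketched in TensorFlow 1.x, such a generator might look as follows; the 28×28 single-channel output and the layer sizes are illustrative assumptions rather than the exact architecture in the figure.

  def generator(z, reuse=False):
      # z: [batch, latent_dim] code -> [batch, 28, 28, 1] image
      with tf.variable_scope("generator", reuse=reuse):
          h = tf.layers.dense(z, 7 * 7 * 128, activation=tf.nn.relu)
          h = tf.reshape(h, [-1, 7, 7, 128])
          h = tf.layers.conv2d_transpose(h, 64, 5, strides=2, padding="same",
                                         activation=tf.nn.relu)    # 7x7 -> 14x14
          img = tf.layers.conv2d_transpose(h, 1, 5, strides=2, padding="same",
                                           activation=tf.nn.tanh)  # 14x14 -> 28x28
      return img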

Difficulties arising from training plain GANs

There are a number of challenges in training plain GANs; one of the most significant that I found is the sampling of the latent vector/code. This code is merely noise sampled from a prior distribution over the latent variables. There are methods to overcome this challenge, including the use of a VAE that encodes the latent variables and learns the prior distribution of the data to be generated. This works better because the Encoder learns the distribution of the data, and we can then sample from this learned distribution rather than from random noise.

Training Details

We know that the cross-entropy between two distributions $p$ (the true distribution) and $q$ (the estimated distribution) is given by:

$$H(p, q) = -\sum_{x} p(x) \log q(x)$$

For binary classification, with true label $y \in \{0, 1\}$ and predicted probability $\hat{y}$,

$$H(y, \hat{y}) = -\,y \log \hat{y} - (1 - y) \log (1 - \hat{y})$$

For GANs, it is assumed that half of the data comes from the true data distribution and the other half from the estimated (generated) distribution, and hence the Discriminator's loss becomes

$$\mathcal{L}_D = -\tfrac{1}{2}\,\mathbb{E}_{x \sim p_{data}}\left[\log D(x)\right] - \tfrac{1}{2}\,\mathbb{E}_{z}\left[\log\big(1 - D(G(z))\big)\right]$$

Training a GAN involves optimizing these two loss functions simultaneously, following the minimax game

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{data}}\left[\log D(x)\right] + \mathbb{E}_{z \sim p_z}\left[\log\big(1 - D(G(z))\big)\right]$$

where $D(x)$ is the probability the Discriminator assigns to $x$ being real and $G(z)$ is the image the Generator produces from the code $z$.
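
In practice, the minimax game is played by alternating gradient steps: one optimizer updates only the Discriminator's variables to decrease the discriminator loss, and another updates only the Generator's variables to decrease the generator loss. A rough sketch of this wiring in TensorFlow 1.x (the scope names, the learning rate, and `d_loss`/`g_loss`, produced by loss functions such as the ones implemented below, are assumptions):

  d_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope="discriminator")
  g_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope="generator")

  # Each variable set is optimized against its own loss.
  d_train_op = tf.train.AdamOptimizer(2e-4).minimize(d_loss, var_list=d_vars)
  g_train_op = tf.train.AdamOptimizer(2e-4).minimize(g_loss, var_list=g_vars)

  # Per iteration: one (or more) Discriminator step(s), then one Generator step.
  # sess.run(d_train_op, feed_dict={...})
  # sess.run(g_train_op, feed_dict={...})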

The Discriminator here is only concerned with classifying whether an image is real or fake; no attention is paid to whether the image actually contains recognizable objects. This is evident when we examine the images generated by a GAN on CIFAR (see below).

We can redefine the Discriminator's loss objective to include labels. This has been shown to improve subjective sample quality, e.g. on MNIST or CIFAR-10 (both of which have 10 classes).

An implementation of the above losses in Python and TensorFlow is as follows:

  
  def VAE_loss(true_images, logits, mean, std):
      """
        Args:
          true_images : batch of input images
          logits      : linear output of the decoder network (the constructed images)
          mean        : mean of the latent code
          std         : standard deviation of the latent code
      """
      # Flatten the images so each pixel is treated as an independent output.
      imgs_flat    = tf.reshape(true_images, [-1, img_h*img_w*img_d])
      # KL divergence between N(mean, std^2) and the unit Gaussian N(0, I).
      encoder_loss = 0.5 * tf.reduce_sum(tf.square(mean)+tf.square(std)
                     -tf.log(tf.square(std))-1, 1)
      # Per-pixel cross-entropy between the reconstruction and the true image.
      decoder_loss = tf.reduce_sum(tf.nn.sigmoid_cross_entropy_with_logits(
                     logits=logits, labels=imgs_flat), 1)
      return tf.reduce_mean(encoder_loss + decoder_loss)
  
  
  def GAN_loss_without_labels(true_logit, fake_logit):
      """
        Args:
          true_logit : Given data from true distribution,
                      `true_logit` is the output of Discriminator (a column vector)
          fake_logit : Given data generated from Generator,
                      `fake_logit` is the output of Discriminator (a column vector)
      """

      true_prob = tf.nn.sigmoid(true_logit)
      fake_prob = tf.nn.sigmoid(fake_logit)
      d_loss = tf.reduce_mean(-tf.log(true_prob)-tf.log(1-fake_prob))
      g_loss = tf.reduce_mean(-tf.log(fake_prob))
      return d_loss, g_loss
  
  
  def GAN_loss_with_labels(true_logit, fake_logit, labels):
      """
        Args:
          true_logit : Given data from true distribution,
                      `true_logit` is the output of Discriminator (a matrix now)
          fake_logit : Given data generated from Generator,
                      `fake_logit` is the output of Discriminator (a matrix now)
          labels     : one-hot class labels of the real images in the batch
      """
      # Real images should be classified with their true labels ...
      d_true_loss = tf.nn.softmax_cross_entropy_with_logits(
                    labels=labels, logits=true_logit, dim=1)
      # ... while generated images should be pushed away from those labels.
      d_fake_loss = tf.nn.softmax_cross_entropy_with_logits(
                    labels=1-labels, logits=fake_logit, dim=1)
      # The Generator wants its images to be classified with the real labels.
      g_loss = tf.nn.softmax_cross_entropy_with_logits(
                    labels=labels, logits=fake_logit, dim=1)

      d_loss = d_true_loss + d_fake_loss
      return tf.reduce_mean(d_loss), tf.reduce_mean(g_loss)
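
For instance, using the illustrative `generator`/`discriminator` sketches from earlier (again an assumption about the wiring, not the exact code from the repository), the losses above would be hooked up roughly like this:

  true_logit = discriminator(real_images)               # logits for real images
  fake_logit = discriminator(generator(z), reuse=True)  # logits for generated images
  d_loss, g_loss = GAN_loss_without_labels(true_logit, fake_logit)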
  

Experiments of VAEs and GANs on MNIST

#1 Training Discriminator without using labels

I trained a VAE and a GAN on MNIST; the code can be found here. MNIST consists of binary 28×28 images.

In the following images,
Left: Grid of 64 original images from the data distribution
Middle: Grid of 64 images generated from VAE
Right: Grid of 64 images generated from GAN

Iteration 1

Iteration 2

Iteration 3

Iteration 4

Iteration 100

Last epoch of VAE (125) and of GAN (368)

Below is a GIF of the images generated by the GAN as a function of the number of epochs. (The model was trained for 368 epochs.)

Clearly, the images generated by the VAE are somewhat blurry compared to the ones generated by the GAN, which are much sharper. This shouldn't come as a surprise; the result is expected, because of all the possible outcomes the VAE could generate from the distribution, it effectively averages them. To reduce the blurriness of the images, an $L_1$ loss can be employed instead of the $L_2$ loss.
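
For example, the reconstruction term of the VAE loss could be switched from a squared-error ($L_2$) term to an absolute-error ($L_1$) term; a short sketch of the difference (TensorFlow 1.x, with `x` and `x_hat` as in the earlier sketches):

  recon_l2 = tf.reduce_sum(tf.square(x - x_hat), 1)   # L2: averages outcomes -> blurrier
  recon_l1 = tf.reduce_sum(tf.abs(x - x_hat), 1)      # L1: tends to give sharper images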

#2 Training Discriminator using labels

Coming soon…

Experiments of VAEs and GANs on CIFAR

Coming soon…

Further Reading