GANs: Making Fakes Look Real
Apr 27, 2020
What is a GAN?
GANs, or Generative Adversarial Networks, were invented by Ian Goodfellow [1] in 2014. In a GAN architecture two neural networks play adversarial roles (hence the name): a generator and a detector, or discriminator. The generator attempts to create data such that the discriminator cannot tell whether a sample is real (drawn from a given distribution) or fake (produced by the generator). The generator is given random noise as its initial input and learns to turn it into data; this generated data is then fed into the discriminator, into which we simultaneously feed real data. The discriminator outputs a label (in reality a probability) for both the fake (generated) and the real data. A basic flow can be seen in the following figure:
How Does it Really Learn?
For any type of learning to take place we need to decide on an objective and, through it, a loss function that lets us penalise the network for making wrong decisions. On those grounds we penalise the discriminator for wrong decisions, with a loss that can be as simple as the following:
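One simple way to write such a loss is the binary cross-entropy underlying the objective in the original GAN paper [1]; here D(x) denotes the discriminator's estimated probability that a sample x is real, G(z) is the generator's output for noise z, and p_data and p_z are the data and noise distributions (this is one common formulation, not the only possible one):

$$
\mathcal{L}_D = -\,\mathbb{E}_{x \sim p_{data}}\big[\log D(x)\big] \;-\; \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]
$$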
(We expect the value to be 1 if the data is real and 0 if the data is fake or generated)
Notice that this is a very simple illustration of the discriminator loss, kept simple for easier understanding. Our discriminator will output two values, which are essentially probabilities (p^d_real and p^d_fake, the discriminator's probability outputs for real data and generated data respectively) that the data is real or fake, and we penalise the discriminator for making wrong predictions (labelling real data fake or fake data real). Similarly, we will have a generator loss function that can simply be:
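One common choice in practice is the non-saturating form, which penalises the generator whenever the discriminator labels its samples as fake (the original minimax formulation in [1] instead minimises log(1 − D(G(z))), but both pursue the same goal):

$$
\mathcal{L}_G = -\,\mathbb{E}_{z \sim p_z}\big[\log D(G(z))\big]
$$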
A simple GAN with a simple multi-layer perceptron and the loss functions we discussed is able to generate decently identifiable images; below are multiple outputs for the digit 3 that the generator has learnt from the MNIST dataset.
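To make this concrete, here is a minimal sketch of such an MLP GAN in PyTorch, the framework used in the accompanying notebook [3]; the layer sizes, learning rates and training step below are illustrative assumptions rather than the exact code from that notebook:

```python
# A minimal sketch of an MLP GAN for MNIST in PyTorch (hyper-parameters are
# illustrative assumptions, not the exact settings of the notebook in [3]).
import torch
import torch.nn as nn

latent_dim = 100          # size of the random noise vector fed to the generator
image_dim = 28 * 28       # flattened MNIST image

# Generator: noise -> fake image
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, 512), nn.ReLU(),
    nn.Linear(512, image_dim), nn.Tanh(),   # outputs in [-1, 1]
)

# Discriminator: image -> probability that it is real
discriminator = nn.Sequential(
    nn.Linear(image_dim, 512), nn.LeakyReLU(0.2),
    nn.Linear(512, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

criterion = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images):
    """One adversarial update, given a batch of real MNIST images scaled to [-1, 1]."""
    batch = real_images.size(0)
    real_images = real_images.view(batch, -1)
    ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

    # Discriminator: push its output towards 1 on real data and 0 on fakes
    noise = torch.randn(batch, latent_dim)
    fake_images = generator(noise).detach()
    loss_d = criterion(discriminator(real_images), ones) + \
             criterion(discriminator(fake_images), zeros)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: try to make the discriminator output 1 on its fakes
    noise = torch.randn(batch, latent_dim)
    loss_g = criterion(discriminator(generator(noise)), ones)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```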
What are they Really Capable Of?
This, of course, is a mere representation of the kind of results we can generate with GANs. The new generation of these networks produces such state-of-the-art images that it becomes really difficult to distinguish real from generated images. For example, BIGGANs [4] are able to generate such detailed images that the results are a close approximation of the real world, as is evident from the image below, taken from the original BIGGAN paper.
BIGGAN outputs are almost like reality; the images are of the highest quality. They need a lot of time to train, but the quality of the images more than compensates for it, and this high quality is not unique to BIGGANs alone.
Possible Applications for GANs are Manifold
- For generation of animation characters.
- For creating sharper, higher-resolution images from low-resolution images.
- For generating data in domains where data is scarce and collecting new data is not possible; GANs can create new samples that can be used to train other networks (see the short sketch after this list).
- For domain adaptation of networks that are trained in simulated environments.
- GAN variants such as deep fakes can be used for motivation in, say, the fitness industry, or for better real-time VFX in the movie industry.
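As a rough illustration of the data-generation point above, a trained generator can simply be sampled to produce extra training examples for a downstream model; the generator object and shapes below are hypothetical and assume the MLP GAN sketched earlier:

```python
# Hypothetical sketch: sampling an already-trained generator (e.g. the MLP GAN
# sketched above) to create synthetic images that augment a small real dataset.
import torch

def synthesize(generator, n_samples, latent_dim=100):
    """Draw n_samples fake MNIST-style images from a trained generator."""
    generator.eval()
    with torch.no_grad():
        noise = torch.randn(n_samples, latent_dim)
        fakes = generator(noise)               # shape: (n_samples, 28*28)
    return fakes.view(n_samples, 1, 28, 28)    # reshape to image tensors

# real_images is assumed to be a tensor of real training images in the same
# value range as the generator's output; the synthetic batch is concatenated
# with it before training a downstream classifier, for example:
# augmented = torch.cat([real_images, synthesize(generator, 1000)], dim=0)
```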
Deep Fakes?
Deep fakes are a variant of standard GANs that work with the very narrow goal of reconstructing features (mostly faces) from a given set of features belonging to a different person. The following image was generated by a deep fake network: facial features from two people were mixed, and the final images after the initial swaps are completely different from the original appearances.
This, of course, is a cause for concern when put to destructive use; for example, it can be used to generate fake videos and images of public figures in order to mislead public opinion on important policy matters. A few such incidents have already been in the limelight, like former President Obama's fake video [7]. Such instances, combined with the menace of fake news, threaten democratic processes in various countries along with the personal reputation of individuals.
With the capabilities and resources required to generate deep fakes becoming easily accessible, misuse may spread and cause significant damage. Luckily, countermeasures are already coming into effect: there are networks and mechanisms [8] that can detect whether or not a given video has been generated using deep fakes. Having said that, technologies across the spectrum will keep improving, and it is important that we remain vigilant and prevent misuse of the technology for destructive purposes.
In the words of William Kingdon Clifford: "The danger to society is not merely that it should believe wrong things, but that it should lose the habit of testing things and inquiring into them."
References:
[1]. https://arxiv.org/abs/1406.2661
[3]. https://github.com/chahalinder0007/Pytorch_GAN/blob/master/pytorch_GAN.ipynb
[4]. https://arxiv.org/pdf/1809.11096.pdf
[5]. https://towardsdatascience.com/must-read-papers-on-gans-b665bbae3317