Deep Fakes: When AI turns evil
Over the past two decades, almost every field has been affected by the rise of AI and ML. More often than not, these developments have changed the world for the better, from personalized healthcare to automated customer service. AI has been used in structural drug discovery by predicting 3D protein structures, and it has helped medical professionals analyze lab reports to enable earlier diagnosis. However, AI has also been used for unethical purposes, like the topic for today: deepfakes.
What are Deep Fakes?
Deepfakes may be defined as synthetic media generated using deep learning techniques, in which a person in existing media content is replaced with the likeness of another individual. While early generated content was easy to identify, the technology has improved significantly, and modern deepfakes are becoming hard to distinguish from real videos. Furthermore, the ease with which such content can be generated has made deepfakes a significant issue.
Let us consider the video below, where AI has been used to generate a fake video of Barack Obama. While that video was made to raise awareness about deepfakes, one can easily see how a technology like this could be used to ruin people's reputations or to peddle fake news.
Generative Networks
What really disheartens me is that deepfakes are the result of innovations and ideas that are, at their core, fascinating. The earliest attempts at creating synthetic media can be attributed to autoencoders, which are used for image regeneration, noise removal, and similar tasks. However, the content they generated was far from photorealistic.
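To make the autoencoder idea concrete, here is a minimal sketch: squeeze data through a narrow "bottleneck" and train the network to reconstruct its own input, minimizing reconstruction error. The linear one-unit model, the synthetic 2-D data, and all hyperparameters below are illustrative choices, not taken from any particular system.

```python
import random

random.seed(0)

# Encoder: h = e1*x + e2*y  (2-D point -> 1-D code)
# Decoder: (d1*h, d2*h)     (1-D code  -> reconstructed 2-D point)
# Small positive initialization keeps this toy example stable.
e1, e2, d1, d2 = (random.uniform(0.2, 0.8) for _ in range(4))
lr = 0.01

for step in range(5000):
    # Training data: points lying near the line y = x.
    x = random.gauss(0, 1)
    y = x + random.gauss(0, 0.1)
    h = e1 * x + e2 * y          # encode into the 1-D bottleneck
    rx, ry = d1 * h, d2 * h      # decode back to 2-D
    gx, gy = rx - x, ry - y      # gradient of the squared reconstruction error
    gh = gx * d1 + gy * d2       # backpropagate through the decoder
    d1 -= lr * gx * h
    d2 -= lr * gy * h
    e1 -= lr * gh * x
    e2 -= lr * gh * y

# After training, a point on the line should be reconstructed closely.
h = e1 * 1.0 + e2 * 1.0
print(abs(d1 * h - 1.0) + abs(d2 * h - 1.0))  # reconstruction error, should be small
```

A denoising autoencoder follows the same recipe, except the input is corrupted with noise while the target stays clean, which is why autoencoders were an early route to synthetic and cleaned-up imagery.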
The development of modern, photorealistic deepfakes was made possible by GANs (Generative Adversarial Networks), introduced in 2014. The core idea behind a GAN is to have two networks compete against each other: the generator tries to produce photorealistic images, while the discriminator tries to distinguish the fake images from real ones. Over time, each learns from the other, improving the quality of the generated images.
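The adversarial loop described above can be sketched in a few lines. This is a toy 1-D version, not an image model: the "generator" is a single affine map trying to mimic samples from a normal distribution centered at 4, and the "discriminator" is logistic regression trying to tell real samples from generated ones. The target distribution, network sizes, and learning rate are all illustrative assumptions.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    x = max(-30.0, min(30.0, x))  # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-x))

g_w, g_b = random.gauss(0, 1), 0.0   # generator: z -> g_w*z + g_b
d_w, d_b = random.gauss(0, 1), 0.0   # discriminator: logistic regression
lr, batch = 0.05, 32

for step in range(2000):
    # Discriminator update: real samples labeled 1, generated ones labeled 0.
    real = [random.gauss(4, 1) for _ in range(batch)]
    fake = [g_w * random.gauss(0, 1) + g_b for _ in range(batch)]
    for xs, y in ((real, 1.0), (fake, 0.0)):
        grads = [sigmoid(d_w * x + d_b) - y for x in xs]  # d(BCE)/d(logit)
        d_w -= lr * sum(g * x for g, x in zip(grads, xs)) / batch
        d_b -= lr * sum(grads) / batch
    # Generator update: push generated samples toward being classified as real.
    zs = [random.gauss(0, 1) for _ in range(batch)]
    fakes = [g_w * z + g_b for z in zs]
    grads = [(sigmoid(d_w * f + d_b) - 1.0) * d_w for f in fakes]  # chain rule
    g_w -= lr * sum(g * z for g, z in zip(grads, zs)) / batch
    g_b -= lr * sum(grads) / batch

# The generator's output distribution should now be centered near the real mean of 4.
samples = [g_w * random.gauss(0, 1) + g_b for _ in range(1000)]
print(sum(samples) / len(samples))
```

Real deepfake systems replace both tiny models with deep convolutional networks and train on millions of images, but the competition between the two losses is exactly this.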
Impact of Deep Fakes
Research conducted by Sensity AI determined that around 95% of deepfakes on the internet are nonconsensual pornography, and about 90% of those target women. To put it simply, these are pornographic videos generated of women without their consent or even their awareness. All it now takes to destroy a person's reputation is a single photo, easily accessible thanks to social media.
What further concerns me is how little awareness ordinary people have of this. When the internet came into existence, it made it easier for all of us to communicate and gave people a platform to share their opinions. However, its rise also led to revenge porn. While there are now laws against revenge porn, many do not account for deepfakes, so there are few legal protections for victims of such attacks.
Here I would like to discuss Helen Mort, a victim of such deepfake pornography. An old photo of hers from social media was shared on explicit websites and modified using deepfake tools. Because the content was modified, it was not considered an offence of non-consensually distributing a private sexual photograph.
Where are we headed?
It is hard to give an exact answer to this question. Many researchers believe we are going to see a rise in such attacks, and there is already evidence pointing toward it. It is becoming easier for a layperson to generate such videos without needing to understand the workings behind them. In 2019, an application named DeepNude circulated that removed clothing from an input image. While the application was taken down almost immediately, replicas are still actively circulated on Telegram.
The first step is to ensure we have laws in place that account for such attacks and protect the victims. From an Indian perspective, there are no laws that directly deal with deepfakes. The fight against deepfakes can be framed as a fight for the right to privacy. The Supreme Court, in 2017, acknowledged that the Right to Privacy is a fundamental right of Indian citizens. Besides having a dedicated law for deepfakes, we need to make the public aware of what deepfakes are and the unethical ways in which they have been used.
Now, I certainly am not against the core tech behind deepfakes or against open-source principles. GANs have many real-life applications and have even been used to recreate historical events, giving us some insight into what key moments might have looked like. Open-source principles are also vital for developing the technologies of the future. However, it is necessary that we understand the impact our applications will have on ordinary people and how we can mitigate it.