- Introduction — What are Deepfakes?
- Examples of Malicious Deepfakes
- Autoencoders
- Media Literacy and Detecting Deepfakes
- Wrapping Up
Photo manipulation is an old practice. It has been used to colorize old WW1 images, but it has also been used for propaganda: Josef Stalin infamously had photos altered so that his political opponent Leon Trotsky no longer appeared in important settings. For over 100 years, photo manipulation has been used both to captivate and to deceive viewers.
Today, we consume not only images but also video and audio in our daily lives. The internet has enabled a dramatic increase in video and audio shared by third-party individuals and organizations, in contrast to the fixed TV and radio channels of the past. This is great for gaining new perspectives. Combined with innovations in artificial intelligence, however, it has led us to a new and unsettling concept: deepfakes.
Deepfakes are synthetic media that have been altered or generated with deep neural networks so that the content is fake. This applies to images and speech, but increasingly also to video. There are legitimate use cases for deepfakes in video production, such as aging or de-aging an actor for a role that spans a long period within the narrative, or as an alternative to reshooting close-up scenes where only minor changes are needed. Yet, as of now, deepfakes present more malicious applications than legitimate ones.
In this blog post, I will give you some context for how deepfakes can be used maliciously. I will then discuss autoencoders, since they are one of the most common ways deepfakes are generated. Finally, I will talk about how media literacy and common sense are among the most effective ways we can combat deepfakes.
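To preview the autoencoder idea before we get to it properly, here is a minimal sketch in plain NumPy: a model learns to compress inputs into a small latent code and then reconstruct them, and the reconstruction error drives training. The data, dimensions, and purely linear layers here are illustrative assumptions for the sake of a runnable toy, not a real deepfake model, which would use deep convolutional encoders and decoders on face images.

```python
import numpy as np

# Toy linear autoencoder: compress 8-dim inputs to a 2-dim latent
# code and reconstruct them. All sizes here are illustrative.
rng = np.random.default_rng(0)

X = rng.normal(size=(200, 8))                # toy "images" (8 features each)
W_enc = rng.normal(scale=0.1, size=(8, 2))   # encoder weights
W_dec = rng.normal(scale=0.1, size=(2, 8))   # decoder weights

def loss(X, W_enc, W_dec):
    Z = X @ W_enc                       # encode: project into latent space
    X_hat = Z @ W_dec                   # decode: reconstruct the input
    return np.mean((X - X_hat) ** 2)    # mean squared reconstruction error

initial = loss(X, W_enc, W_dec)
lr = 0.01
for _ in range(500):                    # plain gradient descent
    Z = X @ W_enc
    X_hat = Z @ W_dec
    err = X_hat - X                     # gradient of the loss w.r.t. X_hat (up to a constant)
    grad_dec = Z.T @ err / len(X)
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

final = loss(X, W_enc, W_dec)
print(f"reconstruction loss before: {initial:.3f}, after: {final:.3f}")
```

The key point is that nothing tells the model what a face (or here, a vector) "should" look like; it simply learns whatever compressed representation lets it reconstruct its training data well. Face-swapping deepfakes exploit this by sharing one encoder across two people and swapping the decoders.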
Unfortunately, the majority of deepfake applications have a negative effect on…