Deepfakes are synthetic media created with machine learning algorithms. In 2017, a Reddit user called "deepfakes" posted a pornographic video in which face-swapping technology replaced the original performer's face with a celebrity's. To date, deepfake technology is reportedly used more for pornography than for anything else. But that is not where the story of deepfakes begins or ends.
People did try their hand at similar effects with computer graphics and Photoshop techniques, but the work was hard and cumbersome. With the introduction of artificial neural networks, which loosely mimic the human brain, machine learning made it far easier to synthesize deepfakes. Machine learning systems can handle large data sets and can therefore synthesize deepfake videos, images, and audio, cleaning the raw data and producing highly convincing results.
Creating deepfakes requires either swapping the faces of real people or generating non-existent faces from data sets of thousands of real ones. Huge data sets are needed to train the machine learning models: producing undetectable deepfakes requires high-quality data for both the source and the target. There is not much data on ordinary people in the wild, so we can feel relatively safe for as long as deep learning requires big data sets.
Yet because we are constantly creating data and already live in the infosphere of our information age, we still have to take care. Several no-code apps, open-source tools, and websites assist with manipulating facial expressions and facial identity. Besides deepfakes, there are shallowfakes, which slow down or speed up videos and audio, or mislabel videos with the intention of deceiving. Shallowfakes are mushrooming all around the web and are produced and circulated with malicious intent, but because they do not use sophisticated machine learning, they are not as great a threat as deepfakes.
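To see how little tooling a shallowfake needs, here is a minimal sketch (my own illustration, not any specific tool) of the simplest such edit: changing playback speed by naive index resampling of a sample array. The signal below is synthetic; a real clip would be an audio or video sample stream.

```python
import numpy as np

def change_speed(samples: np.ndarray, factor: float) -> np.ndarray:
    """Resample by nearest-neighbour index picking.

    factor > 1 speeds the clip up (fewer samples kept);
    factor < 1 slows it down (samples repeated). Crude, but that is
    the point: shallowfakes require no machine learning at all.
    """
    n_out = int(round(len(samples) / factor))
    idx = np.minimum((np.arange(n_out) * factor).astype(int), len(samples) - 1)
    return samples[idx]

clip = np.sin(np.linspace(0, 20 * np.pi, 1000))  # stand-in 1000-sample clip
fast = change_speed(clip, 2.0)   # half the samples -> plays twice as fast
slow = change_speed(clip, 0.5)   # double the samples -> plays at half speed
print(len(fast), len(slow))      # 500 2000
```

At 2x speed the function keeps every second sample, which is exactly why such edits are trivial to make and, fortunately, often easy to spot.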
Deepfakes rely on generative adversarial networks (GANs) and autoencoders. What we have to invest in is detection. Governments should establish synthetic media forensics units to detect deepfakes and protect the public. The challenge is to stay on top of AI in all its forms, and particularly of generative synthetic media. We should also be slow to forward videos, images, and audio uncritically.
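The autoencoder idea behind face swapping can be sketched in a few lines. The toy below is an assumption-laden illustration, not a real deepfake pipeline: a one-layer linear autoencoder trained by gradient descent on random vectors standing in for face images. Real face-swap systems train a shared encoder with one decoder per identity; the swap is performed by encoding a frame of person A and decoding it with person B's decoder. Here we show only the core reconstruction learning.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_autoencoder(X, code_dim=4, epochs=500, lr=0.1):
    """Minimise mean squared reconstruction error of X -> code -> X."""
    n, d = X.shape
    W_enc = rng.normal(0.0, 0.1, (d, code_dim))  # encoder weights
    W_dec = rng.normal(0.0, 0.1, (code_dim, d))  # decoder weights
    losses = []
    for _ in range(epochs):
        Z = X @ W_enc            # encode to a low-dimensional "latent face"
        X_hat = Z @ W_dec        # decode back to input space
        err = X_hat - X          # reconstruction error
        losses.append(float(np.mean(err ** 2)))
        # gradient-descent updates for the mean squared error
        W_dec -= lr * (Z.T @ err) / n
        W_enc -= lr * (X.T @ (err @ W_dec.T)) / n
    return W_enc, W_dec, losses

X = rng.normal(size=(64, 16))    # 64 fake "faces", 16 features each
_, _, losses = train_autoencoder(X)
print(f"reconstruction loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The reconstruction loss falls as training proceeds. Production systems replace the linear maps with deep convolutional networks and train on thousands of aligned face crops, but the encode-compress-decode structure is the same.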
Governments should also make it mandatory for all providers of generative synthetic media to label their output, so that bad use is nipped in the bud. The technology has positive applications such as entertainment, intercultural communication, and the disruption of extremist groups. But since its abuse will grow and become harder to detect as the technology matures, we must develop detection tactics alongside the advancing machine learning. Maybe we will need AI to save us from the abuse of AI.