Ground Report | New Delhi: Deepfake technology is affecting people and society, and if it is not controlled in time it can lead to disastrous consequences. Deepfakes are a far more developed and dangerous form of fake news, and like any misused technology they can prove disastrous: they can ruin a person's political, social, and economic life, and their use can prove dangerous in many fields.
The misuse of techniques like deepfakes can pose a great threat to national security and cause political and social instability. Widespread instability can be generated through the manipulation of publicly available information and the spread of misinformation, whether by those in the highest positions of power, by actors with geopolitical ambitions, or by those motivated by violence or economic interests.
What is Deepfake?
We have come to understand that fake news is one of the major ill effects of the internet and social media. But deepfakes are a far more developed and dangerous form of false news, and have emerged as a new vehicle for spreading misinformation and rumours. While ordinary fake news can be checked in many ways, it is extremely difficult for a layperson to spot a deepfake.
'Deepfake' is a combination of 'deep learning' and 'fake': artificial intelligence is used to create a fake replica of a media file, such as an image, audio clip, or video, that looks and sounds like the original.
Through deepfakes, an individual, an institution, a business, and even a democratic system can be damaged in many ways.
Deepfakes allow widespread manipulation of media files (such as face swapping, lip-syncing, or other physical movements), in most cases without the consent of the person depicted, which poses psychological, security, political, and occupational risks.
Harming the image and dignity of celebrities
Deepfakes are widely used to create sexually explicit images or videos in which celebrities are falsely depicted. Popular actors and music stars are the primary targets of these fake adult videos and images: creators superimpose actresses' faces onto the bodies of adult film stars, transferring sexual fantasies from people's imaginations to the internet.
From Bollywood to Hollywood, many popular celebrities have been targeted by deepfakes. And before experts can establish that a deepfake video is fake, it goes viral on the internet and reaches a large number of people.
Many adult websites now use deepfake detection tools to remove such videos on a daily basis. But in most countries there is still no law to deal with this kind of content, which makes it difficult to control and badly damages the image and dignity of the celebrities targeted.
Deepfakes have long been used in pornography to tarnish the image of celebrities by replacing the faces of adult performers with theirs. Women are the targets of most deepfake videos. Such videos are made with the purpose of causing mental and social harm to a person, and are also used to intimidate victims.
How does it work?
Artificial intelligence is used to transfer the words, body movements, or expressions of one person onto another. The result can be made more 'convincing' by using a Generative Adversarial Network (GAN), in which two neural networks are trained against each other: a generator that produces fakes and a discriminator that tries to tell them from real data. In most cases it becomes very difficult to know whether the media shown is real or fake. The open-source development platform GitHub hosts a large amount of software capable of creating deepfakes.
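As a toy illustration of the adversarial idea behind GANs (not an actual deepfake system), the sketch below trains a tiny linear 'generator' against a logistic 'discriminator' on one-dimensional data. All model shapes, learning rates, and step counts here are illustrative assumptions; real deepfake generators are large neural networks trained on images.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    # Clip the logit to avoid overflow in exp for extreme values
    return 1.0 / (1.0 + np.exp(-np.clip(x, -60, 60)))

# "Real" data: 1-D samples centred at 4.0 (stands in for genuine media)
def sample_real(n):
    return rng.normal(4.0, 0.5, n)

a, b = 1.0, 0.0   # generator: z -> a*z + b, starts far from the real data
w, c = 0.1, 0.0   # discriminator: logistic regression on a scalar input
lr = 0.01

for step in range(3000):
    # Discriminator update: push D(real) -> 1 and D(fake) -> 0
    x_real = sample_real(32)
    z = rng.normal(0, 1, 32)
    x_fake = a * z + b
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    g_real = d_real - 1.0          # BCE gradient w.r.t. logit, label 1
    g_fake = d_fake                # BCE gradient w.r.t. logit, label 0
    w -= lr * (np.mean(g_real * x_real) + np.mean(g_fake * x_fake))
    c -= lr * (np.mean(g_real) + np.mean(g_fake))

    # Generator update: adjust a, b so the discriminator labels fakes as real
    z = rng.normal(0, 1, 32)
    x_fake = a * z + b
    d_fake = sigmoid(w * x_fake + c)
    g_out = (d_fake - 1.0) * w     # gradient through D into generator output
    a -= lr * np.mean(g_out * z)
    b -= lr * np.mean(g_out)

# After training, generated samples should sit near the real mean of 4.0
fake_mean = float(np.mean(a * rng.normal(0, 1, 1000) + b))
print(fake_mean)
```

The same tug-of-war, scaled up to convolutional networks over pixels, is what makes deepfake output progressively harder to distinguish from genuine footage.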
How to recognise Fake News
Do not immediately believe information from suspicious sources such as social media posts or dubious websites. You can verify such news with a Google search, and you should not share any suspicious story on social media until it is confirmed. It has also become common to spread false news through misleading images edited in Photoshop; any such picture can be checked using Google's reverse image search.
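Reverse image search services typically match pictures by a perceptual fingerprint rather than exact bytes, so a lightly edited copy still matches its source. As an illustration of that idea only (real services use far more sophisticated matching), the sketch below computes a simple 'average hash' and compares a slightly brightened copy of an image against an unrelated one; the images here are synthetic arrays, an assumption for the sake of a self-contained example.

```python
import numpy as np

def average_hash(img, size=8):
    # Downsample to size x size by block-averaging, then threshold at the mean:
    # the resulting bit pattern is a coarse fingerprint of the image
    h, w = img.shape
    img = img[: h - h % size, : w - w % size]
    small = img.reshape(size, img.shape[0] // size,
                        size, img.shape[1] // size).mean(axis=(1, 3))
    return (small > small.mean()).flatten()

def hamming(h1, h2):
    # Number of differing bits between two fingerprints
    return int(np.sum(h1 != h2))

rng = np.random.default_rng(1)
original = rng.random((64, 64))                  # stands in for a news photo
edited = np.clip(original + 0.02, 0, 1)         # lightly brightened copy
other = rng.random((64, 64))                    # unrelated image

d_edit = hamming(average_hash(original), average_hash(edited))
d_other = hamming(average_hash(original), average_hash(other))
print(d_edit, d_other)
```

A small Hamming distance means the two images almost certainly share a source, which is how a doctored photo can be traced back to the original it was made from.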
Deepfake technology, on the other hand, is developing rapidly, and it will become very difficult for an ordinary person to recognise its output. In such a situation, the responsibility falls on governments to put strict regulation in place, promote social awareness, and support the development of the technology needed to detect deepfake media. Efforts are being made globally in this area: researchers from Stanford University and UC Berkeley recently developed a program that uses artificial intelligence to identify deepfake videos.