A deepfake video that seemingly features Indian actress Rashmika Mandanna has been circulating on social media, garnering millions of views. However, upon closer inspection, it has been revealed to be a fabrication. The video, which initially appears to be of Mandanna entering an elevator, is actually a manipulated version of a video featuring a woman named Zara Patel.
The original video was posted on Instagram on October 8 and there’s no evidence to suggest that Patel had any involvement in the creation of the deepfake. The identity of the individual or group who created the manipulated video remains unknown.
Deepfake video of Rashmika Mandanna
In the deepfake, Patel’s face morphs into that of Mandanna after about a second. While the manipulation might seem obvious when the real and fake videos are compared side-by-side, it’s not immediately apparent when viewed quickly in a social media feed.
The rise of deepfake videos has posed significant challenges in recent years. Deepfake tools enable virtually anyone to create convincing videos using someone else's face, placing them in scenarios they've never been in. This has led to serious issues, such as the creation of fake nude images of students at a New Jersey high school using AI-assisted technology.
Even seemingly harmless manipulations flooding platforms like X, Facebook, TikTok, and Instagram can undermine our trust in visual content. With the 2024 presidential election on the horizon, the prevalence of political deepfakes is expected to rise.
Fakery is not a new phenomenon and predates the internet. However, the rapid advancement of tools for creating fake images means that we can no longer trust everything we see. This incident serves as a stark reminder of the need for legal and regulatory frameworks to address the issue of fake images online.
What is a Deepfake?
One of the major ill effects of the internet and social media, as we have come to understand, is fake news. A deepfake, however, is a far more sophisticated and dangerous form of false information, and it has emerged as a new vehicle for spreading misinformation and rumours. While ordinary fake news can be checked in many ways, spotting a deepfake is extremely difficult for a layperson.
The term "deepfake" combines "deep learning" and "fake": artificial intelligence is used to create a fake replica of a media file, such as an image, audio clip, or video, that looks and sounds like the original.
Deepfakes can damage individuals, institutions, businesses, and even democratic systems in many ways. They enable widespread manipulation of media files, such as face-swapping, lip-syncing, or altering other physical actions, usually without the subject's prior permission. This creates psychological, security, political, and occupational risks.