Deepfake Fraud in India: Why Organisations Face New Risks from AI Scams


When Your Boss on Video Isn’t Really There

It was an ordinary workday for a finance professional in Hong Kong—until their CFO appeared on a live video call, flanked by senior colleagues, demanding an urgent $25.6 million transfer. His voice, his face, his mannerisms—all exactly right.

Except he was never there.
Neither were the colleagues.
Every face, every voice, every gesture was a deepfake.

By the time the truth was uncovered, the money had vanished.

This isn’t science fiction. It’s already happening across Asia—and India may be next in line.

The New Face of Fraud

Deepfake scams have escalated from manipulated celebrity videos to real-time impersonations of leaders, colleagues, and family members. Traditional fraud controls—voice authentication, video calls, even face recognition—are failing.

OpenAI CEO Sam Altman put it bluntly at a Federal Reserve conference: AI has defeated most current methods of human authentication.

The warning isn’t theoretical. Deepfake scams jumped 1,740% in North America in 2023. Losses in the U.S. are projected to hit $40 billion by 2027—triple the 2023 figure.

India is already feeling the heat. Deepfake incidents rose 280% year-on-year between 2023 and 2024. Over 75% of Indians online have encountered deepfake content, and 38% report experiencing deepfake-related scams.

The fraud crisis is no longer impending. It’s here.

Regulators Are Responding—But It’s Just the Beginning

In the U.S., the Federal Reserve is collaborating with AI firms like OpenAI to build deepfake detection tools, while the New York State Department of Financial Services has warned companies about AI-enabled social engineering scams.

India has launched its own multi-layered defense:

  • March 15, 2024: The Ministry of Electronics & IT issued an Advisory on Anti‑Deepfake and Synthetic Media, requiring platforms to detect, label, watermark, and remove manipulated content, and to disclose potential unreliability.
  • November 27, 2024: CERT‑In issued a high‑severity advisory—its most urgent alert level—signaling that deepfakes were no longer a theoretical risk but already being exploited. It recommended immediate countermeasures, including employee verification protocols and awareness training.
  • August 8, 2025: Parliament unveiled a cybercrime readiness plan anchored in the IT Act 2000, Digital Personal Data Protection Act 2023, and Bharatiya Nyaya Sanhita 2023. Key measures included rapid takedown of unlawful content within 72 hours, AI-driven detection tools like MuleHunter.AI, and nationwide awareness campaigns such as Cyber Jagrookta Diwas.

The takeaway: technology is critical, but vigilance, training, and a culture of healthy skepticism are just as important.

What Organisations Must Do Now

1️⃣ Move Beyond Single-Mode Authentication

Voice or facial recognition alone is no longer safe—AI can mimic both with eerie precision.

💡 A multinational bank added OTPs and hardware token approvals on top of voiceprint verification, blocking attacks using cloned voices.
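Sketched below is what that layered rule can look like in practice. This is a minimal illustration with hypothetical names and thresholds, not any bank's actual system: the voiceprint result is just one input, and a high-value transfer is released only when every independent factor passes.

```python
# Minimal sketch of step-up authentication (hypothetical names/thresholds).
# A voiceprint match alone never releases a transfer; an OTP and a
# hardware-token approval must each independently succeed as well.
from dataclasses import dataclass

@dataclass
class TransferRequest:
    amount: float
    voiceprint_matched: bool  # voice-biometric check
    otp_verified: bool        # one-time password confirmed out of band
    token_approved: bool      # physical hardware token pressed

def authorize(req: TransferRequest, step_up_threshold: float = 10_000) -> bool:
    """Release the transfer only when every required factor passes."""
    if req.amount < step_up_threshold:
        return req.voiceprint_matched and req.otp_verified
    # High value: voiceprint + OTP + hardware token are all mandatory.
    return req.voiceprint_matched and req.otp_verified and req.token_approved

# A cloned voice on its own gets nowhere:
print(authorize(TransferRequest(25_600_000, True, False, False)))  # False
```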

2️⃣ Adopt Multi-Modal, Multi-Factor Verification

Blend biometrics, device IDs, behavioral analytics, and app confirmations—especially for high-value transactions.

💡 A financial services firm combined fingerprint recognition, device fingerprinting, and typing patterns with app-based approvals, drastically reducing impersonation risk.
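One common way to blend such signals is a weighted trust score with a stricter cut-off for high-value transactions. The signal names, weights, and thresholds below are illustrative assumptions, not the firm's actual model:

```python
# Illustrative multi-modal scoring (all weights/thresholds are assumptions).
SIGNAL_WEIGHTS = {
    "fingerprint_match":    0.30,  # biometric
    "known_device":         0.25,  # device fingerprinting
    "typing_pattern_match": 0.20,  # behavioural analytics
    "app_confirmation":     0.25,  # push approval in the official app
}

def trust_score(signals: dict[str, bool]) -> float:
    """Weighted sum of the signals that passed, in [0, 1]."""
    return sum(w for name, w in SIGNAL_WEIGHTS.items() if signals.get(name))

def is_verified(signals: dict[str, bool], amount: float) -> bool:
    # High-value transactions demand (almost) every factor.
    required = 0.95 if amount >= 100_000 else 0.50
    return trust_score(signals) >= required

signals = {"fingerprint_match": True, "known_device": True,
           "typing_pattern_match": False, "app_confirmation": True}
print(is_verified(signals, amount=500_000))  # False: one factor short
print(is_verified(signals, amount=5_000))    # True: low-value threshold met
```

The point of the design is independence: a deepfake that clones a face or voice still fails the device, behavioural, and app-approval checks.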

3️⃣ Train Teams for AI-Enabled Attacks

Technology alone can’t defend against scams—employees must learn to spot the red flags.

💡 A global corporation ran quarterly simulations where “executives” demanded urgent actions via deepfake calls. Teams practiced escalation instead of reaction.

4️⃣ Treat Biometric Data Like Gold

Limit, encrypt, and regularly audit where voice and facial data are stored, and comply with privacy laws such as the GDPR and India’s DPDP Act to avoid legal or reputational fallout.

💡 An insurance company audits its biometric repositories quarterly, deletes unnecessary data, and updates encryption protocols while training staff on lawful handling.
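The encrypt-and-purge routine such an audit automates can be quite small. Here is a simplified sketch using the open-source Python cryptography library's Fernet recipe; the in-memory vault, local key, and one-year retention window are all placeholders (a production system would use a KMS/HSM and a real datastore):

```python
# Simplified sketch: encrypt biometric templates at rest, purge stale ones.
# Requires: pip install cryptography
from datetime import datetime, timedelta, timezone
from cryptography.fernet import Fernet

RETENTION = timedelta(days=365)         # placeholder retention window
fernet = Fernet(Fernet.generate_key())  # placeholder; use a KMS/HSM in production

# user_id -> (encrypted template, time stored); stands in for a real datastore
vault: dict[str, tuple[bytes, datetime]] = {}

def store_template(user_id: str, template: bytes) -> None:
    """Encrypt a voice/face template before it ever touches storage."""
    vault[user_id] = (fernet.encrypt(template), datetime.now(timezone.utc))

def purge_expired() -> int:
    """Quarterly audit step: delete templates older than the retention window."""
    now = datetime.now(timezone.utc)
    expired = [uid for uid, (_, stored) in vault.items() if now - stored > RETENTION]
    for uid in expired:
        del vault[uid]
    return len(expired)

store_template("emp-001", b"...voiceprint feature vector...")
print(purge_expired())  # 0: nothing past retention yet
```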

5️⃣ Rehearse the Worst-Case Scenario

Don’t wait for an actual attack to test your defenses.

💡 A major bank staged a fake CEO video call ordering an urgent transfer. Teams ran their protocols, stress-tested coordination, and refined playbooks before any real crisis hit.
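Parts of such a drill can even be scripted. The sketch below replays a spoofed “urgent transfer” request against a simple escalation rule and fails loudly if the protocol would have acted instead of escalating; the pressure cues and amount threshold are invented for illustration:

```python
# Drill harness sketch (cues and threshold are illustrative assumptions).
URGENCY_CUES = ("urgent", "immediately", "confidential", "right now")

def should_escalate(channel: str, amount: float, message: str) -> bool:
    """Drill rule: video/voice requests for large sums, or with pressure
    language, go to out-of-band callback verification, never straight to payment."""
    pressured = any(cue in message.lower() for cue in URGENCY_CUES)
    unverified_channel = channel in {"video_call", "voice_call"}
    return unverified_channel and (amount >= 50_000 or pressured)

# Simulated attack from the tabletop scenario:
drill = dict(channel="video_call", amount=25_600_000,
             message="This is confidential. Wire it immediately.")
assert should_escalate(**drill), "Drill failed: request was not escalated"
print("Drill passed: request escalated for callback verification.")
```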

Final Thoughts

The rise of deepfake fraud isn’t just a passing headline—it’s a structural shift in how trust can be manipulated in business. For CFOs and leadership teams in India, the question isn’t if these scams will knock on the door, but when.

Traditional fraud controls alone are no longer enough. Organisations need layered defenses: smarter technology, vigilant processes, and above all, a culture where people pause, question, and verify before acting.

In the age of AI, trust isn’t automatic; it has to be defended. The organisations that understand this won’t just survive. They’ll lead.

Suggested Reading

  1. The sudoku of fraud detection: How banks can use AI to spot AI-generated deepfakes
  2. Strategies to Combat Deepfake Fraud and Synthetic Identity Threats in Financial Services (nasscom)
  3. Five Ways to Protect Your Business From Deepfake Scams
  4. Deepfake Statistics CFOs Need to Know in 2025
  5. Deepfake: The New Face of Financial Fraud in India