Govt Details Multi-Layered Legal, Regulatory and Awareness Measures to Tackle Deepfakes

By combining laws, regulations, enforcement mechanisms, and public awareness, the government aims to protect citizens, uphold digital trust, and hold platforms accountable.


Devdiscourse News Desk | New Delhi | Updated: 08-08-2025 20:34 IST | Created: 08-08-2025 20:34 IST

The Government of India has reiterated its strong commitment to combating threats posed by deepfakes — AI-generated synthetic audio, video, and text content that can harm an individual’s dignity, reputation, and privacy, while raising serious questions about platform accountability.

Union Minister of State for Electronics and Information Technology Shri Jitin Prasada informed the Rajya Sabha that India has put in place a multi-layered legal, regulatory, and institutional framework to address emerging challenges from deepfake technology, ensuring an open, safe, trusted, and accountable cyberspace.

Existing Legal Framework Addressing Deepfakes 

Several key laws are already in force to address offences involving AI-generated and manipulated content:

  • Information Technology Act, 2000 (IT Act)

    • Sections 66C, 66D, and 66E: Cover identity theft, impersonation, and privacy violations.

    • Sections 67, 67A: Penalise publishing or transmitting obscene or sexually explicit content.

    • Section 69A: Allows blocking orders for unlawful online content.

    • Section 79: Provides for removal of unlawful content on notice.

  • IT Rules, 2021 (Amended 2022 & 2023)

    • Mandate due diligence by intermediaries to prevent hosting/transmitting unlawful content.

    • Address harms from misuse of emerging technologies, including AI.

    • Require swift takedown of impersonation, deepfake content, and privacy violations.

  • Digital Personal Data Protection Act, 2023 (DPDP Act)

    • Ensures personal data is processed lawfully with user consent and safeguards.

    • Penalises creation of deepfakes using personal data without consent.

  • Bharatiya Nyaya Sanhita, 2023 (BNS)

    • Section 353: Penalises spreading false or misleading statements, rumours, or reports causing public mischief.

    • Section 111: Enables prosecution of organised cybercrimes involving deepfakes.

Technology-Neutral Provisions

These laws are technology-neutral — meaning they apply whether harmful content is AI-generated or not. This ensures that AI-based harms are actionable under existing legal provisions.

Government Advisories to Intermediaries

Advisories issued on 26 December 2023 and 15 March 2024 reminded intermediaries of their due diligence obligations under the IT Rules, 2021, and included directions to:

  • Detect and remove deepfakes and impersonation-based misinformation.

  • Inform users about the risks and inaccuracies in AI-generated content.

  • Label unreliable or under-tested AI models and clearly inform users of output limitations.

  • Comply promptly with Grievance Appellate Committee (GAC) orders.

Key Obligations Under IT Rules, 2021

  • Restricted Information (Rule 3(1)(b)): Prohibits obscene, pornographic, privacy-invasive, impersonating, hateful, or misleading content (including deepfakes).

  • User Awareness: Inform users of consequences for sharing unlawful content.

  • Content Removal Timelines (see the illustrative sketch after this list):

    • 72 hours for general unlawful content.

    • 24 hours for privacy violations, impersonation, or nudity.

  • Grievance Redressal: Appointment of Grievance Officers; unresolved grievances can be escalated to GAC at www.gac.gov.in.

  • SSMI (Significant Social Media Intermediary) Additional Obligations: traceability of originators, automated detection tools, compliance reports, locally appointed officers, and internal appeals processes.
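
For illustration only, the minimal Python sketch below encodes the removal timelines summarised above (24 hours for privacy violations, impersonation, or nudity; 72 hours for other unlawful content). The category names and the takedown_deadline helper are assumptions made for this sketch, not terms from the IT Rules, 2021.

```python
from datetime import datetime, timedelta

# Hypothetical categories mirroring the 24-hour window described above;
# these names are illustrative, not taken from the IT Rules, 2021.
PRIORITY_CATEGORIES = {"privacy_violation", "impersonation", "nudity"}

def takedown_deadline(received_at: datetime, category: str) -> datetime:
    """Latest time by which flagged content should be acted on,
    per the 24-hour / 72-hour windows summarised in this article."""
    window_hours = 24 if category in PRIORITY_CATEGORIES else 72
    return received_at + timedelta(hours=window_hours)

# Example: a deepfake impersonation complaint received now would fall
# under the 24-hour window.
print(takedown_deadline(datetime.now(), "impersonation"))
```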

Institutional and Cybercrime Response Ecosystem

India has developed a robust multi-agency response system to tackle online harms:

  • Grievance Appellate Committees (GACs) – Central-level appeals mechanism for content moderation disputes.

  • Indian Cyber Crime Coordination Centre (I4C) – Coordinates cybercrime enforcement, including deepfake content takedowns.

  • SAHYOG Portal – Automated removal notice system for intermediaries.

  • National Cyber Crime Reporting Portal – Allows citizens to report deepfakes and other cybercrimes; helpline 1930 available.

  • CERT-In – Issues guidelines on AI threats; published a deepfake advisory in November 2024.

  • Police – Investigate cybercrime cases at state/local level.

Awareness and Capacity-Building Initiatives

MeitY runs public outreach programmes such as:

  • Cyber Security Awareness Month (October)

  • Safer Internet Day (Second Tuesday of February)

  • Swachhta Pakhwada (1–15 February)

  • Cyber Jagrookta Diwas (CJD) (First Wednesday of every month)

These events engage both citizens and the cyber-technical community to spread awareness on safe digital practices and the risks of synthetic media.

Comprehensive Approach to AI-Driven Harms

Minister Jitin Prasada stressed that India’s cyber legal framework — supported by IT Act provisions, DPDP Act, BNS, IT Rules, GAC, CERT-In, and I4C — is well-equipped to respond to evolving challenges from deepfakes. By combining laws, regulations, enforcement mechanisms, and public awareness, the government aims to protect citizens, uphold digital trust, and hold platforms accountable.
