India's IT Rules Tackle Deepfakes: Mandatory Labelling for AI Content
India proposes amendments to its IT rules, requiring explicit labelling of AI-generated content to curb the threats posed by deepfakes and misinformation. The draft rules hold major social media platforms accountable for verifying synthetic content. Comments on the draft amendment will be accepted until November 6, 2025.
The Indian government has proposed amendments to the IT rules requiring the clear labelling of AI-generated content to counter the threats posed by deepfakes and misinformation. The changes are aimed at increasing the accountability of major platforms such as Facebook and YouTube amid growing concern over the impact of synthetic media on society.
According to the IT Ministry, the growing prevalence of deepfake audio, video, and other synthetic media highlights the potential misuse of generative AI to create misleading content. Such media can be weaponized to spread misinformation, damage reputations, and manipulate elections, prompting the government to mandate clear identification and traceability of synthetically generated content.
The proposed amendments also call for social media platforms to embed metadata in modified content and to enforce compliance measures that preserve the integrity of labelled information. Failure to adhere to these rules could result in the loss of safe harbour protections, underscoring the importance of transparency in distinguishing synthetic from authentic media.
(With inputs from agencies.)

