India's AI Labeling Proposal: Pioneering Misinformation Control

India has proposed a mandate requiring artificial intelligence and social media companies to clearly label AI-generated content, aiming to curb deepfakes and misinformation. The proposal requires labels covering at least 10% of a visual display or the first 10% of an audio clip's duration, emphasizing transparency. Responses from industry stakeholders are sought by November 6.


Devdiscourse News Desk | Updated: 22-10-2025 15:48 IST | Created: 22-10-2025 15:48 IST

India's government has taken a significant step to combat the growing threat of deepfakes and misinformation by proposing a policy requiring artificial intelligence and social media companies to label AI-generated content. This move follows similar initiatives by the European Union and China.

With close to 1 billion internet users, India faces a substantial risk of fake news fueling ethnic and religious tensions. The proposed rules mandate that AI-generated content be visibly marked, with labels covering at least 10% of a visual display's surface or the first 10% of an audio clip's duration. Leading AI firms such as OpenAI, Meta, and Google would be responsible for implementing these rules.

The Indian government's draft proposal, which invites public and industry feedback by November 6, highlights the growing misuse of generative AI tools. This includes misinformation, election manipulation, and identity impersonation. Notably, Indian courts are handling high-profile deepfake lawsuits involving Bollywood stars and AI-generated media.

(With inputs from agencies.)
