India's AI Labeling Proposal: A Push to Curb Deepfakes and Misinformation
India proposes a mandate for AI and social media companies to clearly label AI-generated content, aimed at curbing deepfakes and misinformation. The proposal would require labels to cover at least 10% of a visual display or the first 10% of an audio clip, emphasizing transparency. Feedback from industry stakeholders is sought by November 6.
India's government has taken a significant step to combat the growing threat of deepfakes and misinformation by proposing a policy requiring artificial intelligence and social media companies to label AI-generated content. This move follows similar initiatives by the European Union and China.
With close to 1 billion internet users, India faces a substantial risk of fake news fueling ethnic and religious tensions. The proposed rules mandate that AI-generated content be visibly marked, with labels covering at least 10% of a visual display's surface or the first 10% of an audio clip's duration. Leading AI firms such as OpenAI, Meta, and Google would be responsible for implementing these rules.
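To make the proposed thresholds concrete, here is a minimal illustrative sketch of the arithmetic involved; the function names and parameters are hypothetical and are not drawn from the draft rules themselves.

```python
# Illustrative sketch only: how the proposed 10% thresholds might be computed.
# Function names and parameters are hypothetical, not taken from the draft rules.

def min_label_area(width_px: int, height_px: int) -> float:
    """Minimum labeled area (in pixels) to cover 10% of a visual display's surface."""
    return 0.10 * width_px * height_px

def min_label_duration(clip_seconds: float) -> float:
    """Length (in seconds) of the initial 10% of an audio clip that would carry the label."""
    return 0.10 * clip_seconds

# Example: a 1920x1080 video frame and a 60-second audio clip.
print(min_label_area(1920, 1080))   # 207360.0 pixels of labeled area
print(min_label_duration(60.0))     # the first 6.0 seconds would be labeled
```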
The Indian government's draft proposal, which invites public and industry feedback by November 6, highlights the growing misuse of generative AI tools for misinformation, election manipulation, and impersonation. Notably, Indian courts are already handling high-profile deepfake lawsuits involving Bollywood stars and AI-generated media.
(With inputs from agencies.)