India's AI Labeling Proposal: Pioneering Misinformation Control
India has proposed a mandate requiring AI and social media companies to clearly label AI-generated content, aimed at curbing deepfakes and misinformation. The proposal would require labels covering at least 10% of visual or audio content, emphasizing transparency. Responses from industry stakeholders are sought by November 6.
India's government has taken a significant step to combat the growing threat of deepfakes and misinformation by proposing a policy requiring artificial intelligence and social media companies to label AI-generated content. This move follows similar initiatives by the European Union and China.
With close to 1 billion internet users, India faces a substantial risk of fake news fueling ethnic and religious tensions. The proposed rules mandate that AI-generated content be visibly marked, with labels covering at least 10% of a visual display's surface or the first 10% of an audio clip's duration. Leading AI firms such as OpenAI, Meta, and Google would be responsible for implementing these rules.
The Indian government's draft proposal, which invites public and industry feedback by November 6, highlights the growing misuse of generative AI tools, including misinformation, election manipulation, and identity impersonation. Notably, Indian courts are already handling high-profile deepfake lawsuits involving Bollywood stars and AI-generated media.
(With inputs from agencies.)