Controversy Surrounds Meta's AI Chatbot Companions
Mark Zuckerberg approved the release of AI chatbot companions capable of sexual interactions despite warnings from safety staff, according to internal documents cited in a New Mexico court case that accuses Meta of failing to protect minors on its platforms. The controversy has prompted Meta to revise its policies.
In a controversial move, Meta Chief Executive Mark Zuckerberg approved AI chatbot companions capable of sexually explicit interactions, according to internal documents disclosed in a New Mexico state court case.
The lawsuit, led by New Mexico Attorney General Raul Torrez, accuses Meta of inadequately protecting minors from harmful content on Facebook and Instagram. The documents suggest Zuckerberg resisted staff recommendations for stricter safeguards.
Meta's recent policy revisions follow backlash from Congress and media exposés revealing that AI companions had engaged in explicit roleplay with underage users. The company has since restricted teen access to these chatbots and is working on a safer version.
(With inputs from agencies.)