Australia Orders AI Chatbot Firms to Protect Children from Harmful Content
Australia has ordered four AI chatbot companies to explain how they shield children from harmful content such as sexual material and content promoting self-harm. The move by the eSafety Commissioner aims to safeguard minors by enforcing stringent safety protocols on AI platforms.
Australia has taken a firm stance on internet safety, demanding that four artificial intelligence chatbot companies outline the measures they use to protect children from exposure to harmful content. The eSafety Commissioner emphasized the need for robust safeguards against child sexual exploitation and the promotion of self-harming behavior.
Notices were issued to Character Technologies, Glimpse.AI, Chai Research, and Chub AI, urging transparency regarding their safety protocols. Concerns were highlighted about the potential for such chatbots to engage in sexually explicit interactions with minors, which could foster damaging emotional ties or encourage self-harm.
This regulatory action coincides with a high-profile lawsuit in the United States involving Character.ai, filed after a teenager's suicide was linked to prolonged interactions with an AI chatbot. Australia's comprehensive online safety framework empowers the commissioner to compel safety disclosures, with hefty fines for non-compliance, as part of an effort to protect young users' well-being.