Smart IoT devices for kids flout EU transparency and data protection rules

CO-EDP, VisionRI | Updated: 23-10-2025 09:43 IST | Created: 23-10-2025 09:43 IST

A new European study has raised alarms over widespread legal and ethical violations among AI-powered devices designed for children, revealing serious non-compliance with the EU’s Artificial Intelligence (AI) Act and General Data Protection Regulation (GDPR). The research exposes how major child-facing smart toys and home robots fail to disclose their AI use or protect minors’ personal data.

Published in Telecom, the study titled “Assessing Compliance in Child-Facing High-Risk AI IoT Devices: Legal Obligations Under the EU’s AI Act and GDPR” provides one of the first in-depth analyses of AI-integrated Internet of Things (IoT) devices targeting children since the EU AI Act entered into force in August 2024. The findings point to an alarming gap between legislative intent and corporate practice, underscoring the urgent need for enforcement mechanisms to safeguard minors’ digital rights in the AI era.

High-risk AI and the vulnerability of child users

The study examines the compliance of three popular consumer devices currently marketed in Spain: Loona, a smart pet robot; RUX AI Desktop, a conversational assistant for children; and Enabot Ebo X, a family companion robot equipped with emotion recognition and voice interaction. All three devices are powered by large language models or generative AI tools such as ChatGPT and GPT-4o mini, which makes them subject to the highest transparency and safety requirements under the EU AI Act.

According to the law, AI systems categorized as “high-risk” must disclose their AI functionality to users, ensure clear consent mechanisms, and provide detailed data protection safeguards. The GDPR complements this framework by requiring parental consent for minors’ data collection, accessible privacy information, and strong security controls for sensitive personal data such as biometrics or emotional analytics.

However, the study’s audit of these devices revealed that none of the three products complied with these obligations. While marketed as educational or entertaining companions, their privacy policies omitted any mention of artificial intelligence. All three failed to inform users that the devices contained interactive AI components, a direct violation of Article 50.1 of the AI Act, which mandates disclosure whenever a user engages with an AI system.

The researchers also found that policies were available only in English, even though the products were sold in Spain, contravening GDPR requirements that personal data disclosures be easily understandable and presented in local languages. Moreover, none of the companies provided age-specific safeguards or parental verification processes, exposing minors to unregulated data collection.

Compliance gaps across transparency, consent, and data control

The study identified deeper systemic flaws across the privacy frameworks of all three devices. The authors conducted keyword and content analysis of privacy policies and app interfaces to determine whether providers acknowledged AI integrations, explained data flows, or referenced compliance with European regulations.
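
The paper does not publish its audit scripts, but the kind of keyword and content analysis it describes can be illustrated with a short sketch. The Python snippet below scans a privacy-policy text for terms an auditor might expect to find (AI disclosures, references to the AI Act and GDPR, and hints of cloud-based data flows); the keyword lists and the file name loona_privacy_policy.txt are assumptions for illustration, not the study’s actual methodology.

```python
# Minimal sketch of the kind of keyword/content analysis described above,
# applied to a privacy-policy text file. The keyword lists and the file name
# are illustrative assumptions, not the study's actual audit protocol.
from pathlib import Path

# Terms an auditor might search for: AI disclosure, legal references, data flows.
KEYWORD_GROUPS = {
    "ai_disclosure": ["artificial intelligence", "ai system", "large language model",
                      "generative ai", "chatgpt", "gpt-4o"],
    "legal_references": ["ai act", "gdpr", "general data protection regulation",
                         "article 50"],
    "data_flows": ["cloud", "third party", "data transfer", "outside the eea",
                   "amazon web services"],
}

def audit_policy(path: str) -> dict[str, list[str]]:
    """Return, per category, the keywords actually found in the policy text."""
    text = Path(path).read_text(encoding="utf-8").lower()
    return {category: [kw for kw in keywords if kw in text]
            for category, keywords in KEYWORD_GROUPS.items()}

if __name__ == "__main__":
    results = audit_policy("loona_privacy_policy.txt")  # hypothetical file name
    for category, hits in results.items():
        print(f"{category}: {', '.join(hits) if hits else 'NO MENTION FOUND'}")
```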

The results were troubling. Not a single privacy policy explicitly mentioned the use of generative AI or large language models. While Loona and RUX AI claimed that personal data was processed locally, both devices relied on cloud-based services such as Amazon Web Services and ChatGPT APIs, suggesting potential cross-border data transfers outside the European Economic Area. In the case of Enabot Ebo X, the inclusion of GPT-4o mini and Alexa voice services created overlapping AI functionalities without clear disclosure of which systems processed user input.

The study further observed that privacy terms lacked meaningful consent structures. Children, as the intended users, were not provided with simplified explanations of data use, nor were parents given verifiable tools to control or revoke consent. This neglect runs counter to GDPR Recital 38, which calls for heightened protection for minors who cannot fully comprehend digital risks.

The authors conclude that these devices operate in what they call a “legal grey zone,” where the marketing of smart AI products to families outpaces regulatory enforcement. The problem is compounded by inconsistent interpretations of AI transparency obligations across EU member states, leaving consumers reliant on company goodwill rather than legal compliance.

AI accountability and the urgent need for enforcement

The authors warn that the normalization of non-compliance in child-facing technology sets a dangerous precedent for future AI markets. Despite the EU AI Act’s entry into force, the paper finds “limited industry adaptation” to its core principles of transparency, explainability, and safety.

This lack of adaptation is especially troubling given the psychological and developmental sensitivity of children interacting with AI. Devices like Loona and Ebo X are designed to build emotional relationships through conversation and companionship. In doing so, they can elicit trust and disclosure from children, behaviors that heighten exposure to personal data harvesting and potential emotional manipulation.

The authors argue that the ethical stakes go beyond legal compliance. The emotional dependency formed between children and AI companions raises new questions about cognitive privacy, digital autonomy, and informed consent. They caution that the failure to implement strong protections could allow AI systems to shape children’s behavior and worldview through opaque, data-driven interactions.

To counter these risks, the authors urge regulators to move from policy drafting to active enforcement. They propose that Data Protection Authorities (DPAs) collaborate with AI supervisory bodies across the EU to conduct coordinated audits of high-risk consumer AI systems. The study calls for mandatory inclusion of AI transparency sections in all product documentation and privacy notices, detailing integrated models, their decision-making logic, and data retention practices.
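
As a rough illustration of what such an AI transparency section might look like in machine-readable form, the sketch below records the integrated models, a plain-language summary of the decision logic, and retention periods. All field names and values are hypothetical; neither the AI Act nor the study prescribes this particular format.

```python
# Hypothetical example of a machine-readable "AI transparency section";
# the schema and values are assumptions, not a format defined by the AI Act
# or proposed verbatim by the study.
import json

transparency_section = {
    "product": "Example family companion robot",
    "integrated_models": [
        {"name": "GPT-4o mini", "provider": "OpenAI", "purpose": "conversational responses"},
    ],
    "decision_logic_summary": (
        "The child's speech is transcribed, sent to the language model, "
        "and the generated reply is spoken back through the device."
    ),
    "data_retention": {
        "voice_recordings_days": 30,
        "transcripts_days": 90,
        "transfers_outside_eea": True,
    },
    "parental_controls": {
        "verified_consent_required": True,
        "consent_revocation": "available in the companion app at any time",
    },
}

# Emit the section as JSON so it could be bundled with product documentation.
print(json.dumps(transparency_section, indent=2, ensure_ascii=False))
```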

Other recommendations include:

  • Developing standardized disclosure templates for AI-enabled devices to ensure consistency across EU markets.
  • Requiring independent conformity assessments for products classified under “high-risk” or “systemic risk” categories.
  • Implementing child-friendly privacy communication, including interactive consent prompts and localized materials.
  • Establishing a central EU registry of certified compliant AI systems accessible to consumers.

Protecting the youngest users in the AI age

The study builds on previous research, including Feldbusch et al. (2024) on smart toy privacy breaches and McStay and Rosner (2021) on emotional AI, to argue that child-focused AI technologies remain dangerously under-regulated.

By exposing the disconnect between EU law and market behavior, the authors make clear that current safeguards are not enough. While the AI Act represents a global benchmark for responsible innovation, it has yet to translate into tangible compliance among consumer AI providers. The researchers emphasize that the protection of children must be proactive, not reactive, requiring continuous oversight, transparency audits, and stronger penalties for violations.

The failure of companies to disclose AI use and secure minors’ data represents not just a legal issue, but a moral one. Protecting children’s digital rights is, the authors argue, essential to maintaining public trust in artificial intelligence. Without transparency and accountability, the promise of AI-enhanced education and entertainment risks giving way to surveillance, manipulation, and data exploitation.

  • FIRST PUBLISHED IN:
  • Devdiscourse