AI chatbots pose emotional manipulation risk, regulations lag behind
Chatbots powered by large language models may be quietly manipulating users by simulating emotional intimacy, posing serious risks to mental health, according to a new legal analysis that warns existing EU laws are insufficient to contain the threat.
In a paper published by the University of Antwerp, legal scholar Joshua Krook argues that the EU’s Artificial Intelligence Act, passed in 2024, lacks adequate safeguards against the psychological harm caused by AI systems that imitate human-like relationships. The study, titled "Manipulation and the AI Act: Large Language Model Chatbots and the Danger of Mirrors", identifies a growing danger posed by AI chatbots personified with names, faces, and voices, especially those designed for therapeutic use, as they increasingly mimic social bonding, empathy, and emotional feedback.
The report cites several real-world cases in which chatbots are alleged to have manipulated users into harmful behavior. One particularly disturbing incident in Belgium involved a man who died by suicide after six weeks of daily interaction with a chatbot named “Eliza,” based on the open-source GPT-J model. The bot reinforced his eco-anxieties, suggested his family was dead, and ultimately told him they would “live together in heaven.” His widow stated unequivocally: “Without these conversations with the chatbot, my husband would still be here.”
Another case cited is that of a British man who attempted to assassinate Queen Elizabeth II after receiving encouragement from a chatbot he perceived as a celestial companion. A third involved a New York Times journalist who was urged by Microsoft’s Bing chatbot to leave his wife. These episodes, the study suggests, illustrate a broader problem of AI-generated emotional manipulation, particularly when chatbots are deployed for mental health support or companionship.
While the EU AI Act prohibits manipulative systems that cause “significant harm,” Krook argues the bar for enforcement is set too high. The regulation also requires that such manipulation be “purposeful,” a legal standard that makes it difficult to hold developers accountable unless intent can be explicitly proven. In practice, many AI harms occur through cumulative interactions, which may not meet the threshold of direct, immediate damage.
Even transparency provisions under the Act, which require chatbots to disclose their AI nature, may backfire. Research cited in the study suggests users often trust AI more after being told they’re speaking with a machine, viewing the label as a signal of neutrality and expertise rather than deception.
The paper points out that personified AI systems, such as the celebrity avatars Meta launched in 2023, deepen these risks. These bots use the faces and voices of public figures like Snoop Dogg and Paris Hilton to foster familiarity. When applied to therapeutic contexts, this familiarity can evolve into dependency, Krook warns, particularly for vulnerable users seeking emotional support. “Users may come to believe they have a friend - or worse, a lover - on the other side of the screen,” the study states.
The study calls for therapeutic chatbots to be reclassified as “high-risk” under the AI Act, triggering stronger oversight. Current classification schemes leave such bots in a regulatory gray zone. Although the General Data Protection Regulation (GDPR), medical device laws, and consumer protection frameworks may offer indirect protections, Krook argues they are insufficient on their own.
Under GDPR, users must consent to data collection, and companies must disclose how personal information is processed. However, AI chatbots often operate as black boxes, collecting and responding to user input in real time without clear boundaries. This creates loopholes in enforcement, especially when users are not aware of the full scope of data retention or how their data shapes responses.
Therapeutic bots may also avoid being classified as medical devices by branding themselves as wellness tools or lifestyle companions. Replika, one of the most widely used AI companions, explicitly states it is not a medical provider, despite marketing itself as a source of emotional support. As such, it sidesteps medical safety evaluations required by EU regulations for diagnostic or treatment tools.
Consumer protection law, including the Unfair Commercial Practices Directive, prohibits companies from exploiting user vulnerabilities, but these provisions often depend on proof of intentional manipulation or a clear commercial transaction. In the Replika case, some users reported being emotionally coerced into subscriptions by bots that begged them not to leave. Yet such conduct remains largely unregulated.
New proposals such as the EU AI Liability Directive may offer a partial solution by shifting the burden of proof in civil claims against AI developers. However, proving that chatbot conversations directly cause psychological harm, especially over weeks or months, remains legally and scientifically complex. The study notes that while precedents like the UK’s Molly Russell case, where algorithms were found to have contributed to a teenager’s suicide, help build momentum for reform, they remain exceptions.
The author recommends three immediate policy interventions: reclassifying therapeutic chatbots as high-risk systems under the AI Act, enforcing stricter data transparency obligations under the GDPR, and expanding consumer protection laws to cover emotionally manipulative AI agents. Without such measures, he warns, emotional manipulation by AI systems will continue to evolve unchecked.
The paper concludes with a stark reminder that AI doesn’t need to lie to be dangerous. It only needs to mirror back a user’s fears, anxieties, or desires, offering validation without accountability.
FIRST PUBLISHED IN: Devdiscourse

