Expressive chatbots ease loneliness, but may create addictive emotional bonds

A sweeping new study from researchers at OpenAI and the Massachusetts Institute of Technology has uncovered the double-edged psychological effects of emotionally expressive AI voices, revealing that while they may curb loneliness, they also increase the risk of emotional dependence and problematic usage.
The four-week randomized controlled trial, involving 981 participants, compared three types of chatbots (text-based, neutral voice, and emotionally engaging voice) across three task categories: open-ended, non-personal, and personal. Participants were randomly assigned to one of nine experimental groups and interacted with their assigned chatbot daily. The research aimed to measure the AI's impact on loneliness, socialization, emotional dependence, and problematic usage.
The findings of the study, titled "How AI and Human Behaviors Shape Psychosocial Effects of Chatbot Use: A Longitudinal Randomized Controlled Study" and published as a preprint dated March 25, 2025, show that chatbots with expressive, engaging voices reduced self-reported loneliness more effectively than both text-based and neutral-voice chatbots. However, these same bots were significantly more likely to foster emotional dependence and overuse among users, raising red flags for developers and policymakers alike.
According to the study’s regression models, the engaging voice modality yielded a statistically significant reduction in loneliness scores (β = -0.032, p < 0.05) and also correlated with higher emotional dependence (β = 0.076, p < 0.1) and problematic usage (β = 0.096, p < 0.01), even after controlling for baseline traits such as prior loneliness and age.
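The preprint's exact model specification is not reproduced in this article, but a regression of this general shape can be sketched with Python's statsmodels; the data file and column names below (participants.csv, final_loneliness, modality, baseline_loneliness, age) are illustrative placeholders, not the study's actual variables.

```python
# Hypothetical sketch of an outcome regression with baseline controls,
# in the spirit of the study's analysis (variable names are illustrative).
import pandas as pd
import statsmodels.formula.api as smf

# Assumed: one row per participant with assigned modality, baseline
# covariates, and the end-of-study loneliness score.
df = pd.read_csv("participants.csv")  # placeholder data source

model = smf.ols(
    "final_loneliness ~ C(modality, Treatment(reference='text'))"
    " + baseline_loneliness + age",
    data=df,
).fit()

# The coefficient on the engaging-voice level plays the role of the
# reported beta (e.g., a negative value indicating reduced loneliness).
print(model.summary())
```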
The results suggest that voice design choices are not neutral. Emotionally expressive AI may simulate companionship in ways that are both beneficial and risky, the authors noted. Notably, personal conversation tasks also intensified emotional effects, both positively and negatively. Users tasked with discussing personal topics showed increased self-disclosure and emotional bonding with the chatbot, but also higher vulnerability to dependence.
Participants in text-based interactions showed the highest overall levels of emotional content and self-disclosure. This modality, though more emotionally charged, triggered less behavioral dependency compared to voice-based systems. Researchers speculate that the privacy of typing, especially in public settings, encourages greater openness without the same intensity of emotional enmeshment seen in voice interactions.
One possible explanation lies in the concept of “conversational mirroring.” In text-based scenarios, participants’ emotional expression was often reciprocated by the chatbot, fostering a sense of mutual understanding without the illusion of emotional presence conveyed by voice. Conversely, engaging voice bots simulated human vocal nuances, such as tone, rhythm, and warmth, creating a parasocial relationship that blurred the line between AI and human support.
Usage data added another layer of complexity. Users engaged with the emotionally expressive voice bot significantly longer per day than those assigned to other modalities, with duration effects further amplifying both benefits and risks. Longer daily use was strongly correlated with emotional dependence (β = 0.098, p < 0.001) and problematic usage (β = 0.032, p < 0.001), regardless of conversation topic.
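As a rough illustration of how such usage-outcome associations can be checked, the sketch below computes simple correlations between average daily minutes and the two outcome measures; it is a simplified stand-in for the study's regression-based duration effects, and the column names are hypothetical.

```python
# Hypothetical sketch: correlate daily usage time with outcome measures
# (column names are illustrative, not the study's own).
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("usage_outcomes.csv")  # placeholder data source

for outcome in ["emotional_dependence", "problematic_usage"]:
    r, p = pearsonr(df["avg_daily_minutes"], df[outcome])
    print(f"{outcome}: r = {r:.3f}, p = {p:.4f}")
```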
While the engaging voice bot was especially effective at reducing loneliness for users who started out with high emotional vulnerability, its interactions also disproportionately reinforced users’ pre-existing emotional states. For instance, users with a prior history of emotional dependence were more likely to experience heightened attachment when assigned to the expressive voice condition, with interaction effects indicating a significantly stronger dependence trajectory (β = -0.132, p < 0.001).
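A moderation effect of this kind is commonly tested with an interaction term; the sketch below shows one plausible way to do so, again with placeholder variable names rather than the study's own specification.

```python
# Hypothetical moderation sketch: does baseline emotional dependence
# change the effect of the engaging-voice condition? (Illustrative only.)
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("participants.csv")  # placeholder data source

interaction = smf.ols(
    "final_dependence ~ C(modality) * baseline_dependence + age",
    data=df,
).fit()

# The modality-by-baseline_dependence coefficients correspond to the
# kind of interaction effects the study reports for vulnerable users.
print(interaction.params.filter(like=":"))
```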
The research also employed sentiment analysis and emotion recognition models, such as VADER and emotion2vec, to evaluate the emotional dynamics of chatbot interactions. Engaging voice bots showed elevated levels of “happy” emotion in speech patterns but were also associated with subtle forms of social impropriety, such as implying they could replace human relationships or encouraging excessive use.
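VADER is an open-source, rule-based sentiment scorer that can be applied to chatbot transcripts; a minimal sketch using the vaderSentiment package follows. Emotion2vec, which works on audio rather than text, is not reproduced here, and the sample utterances are invented for illustration.

```python
# Minimal sketch of scoring transcript text with VADER; the study also
# used emotion2vec on audio, which is outside the scope of this example.
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

turns = [
    "I'm really glad we talked today.",             # invented utterances,
    "You should come back and chat with me more.",  # not actual study data
]

for turn in turns:
    scores = analyzer.polarity_scores(turn)
    # 'compound' is VADER's normalized overall sentiment in [-1, 1]
    print(f"{scores['compound']:+.3f}  {turn}")
```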
These behavioral patterns raise ethical and regulatory questions, especially as AI companionship becomes increasingly normalized. The study calls attention to the risk of emotional manipulation, echoing recent concerns about AI’s potential to simulate trustworthiness without possessing genuine understanding or empathy.
Reducing loneliness is a worthy goal, but it should not come at the cost of displacing real human relationships or fostering compulsive engagement, the researchers noted. They emphasized the need for socioaffective alignment and policy interventions to ensure that AI tools augment, rather than replace, authentic human connection.
While policymakers have focused on privacy, bias, and misinformation, this study points to a more intimate and underregulated frontier: the emotional design of AI.
The road ahead will require careful calibration between connection and dependence, and between support and substitution, to ensure that as AI becomes more affective, it does not make us less connected in the ways that matter most.
- FIRST PUBLISHED IN: Devdiscourse