AI companions are on the rise, but at what emotional cost?

CO-EDP, VisionRI | Updated: 19-04-2025 22:11 IST | Created: 19-04-2025 22:11 IST

The rise of AI-powered companionship tools is no longer a speculative frontier of science fiction but a lived reality for millions of people worldwide. At a time when loneliness has been declared a public health crisis, companion AI applications such as Replika, Kindroid, and Character.ai are quietly filling social and emotional voids. Yet a new peer-reviewed study, “The impacts of companion AI on human relationships: risks, benefits, and design considerations”, published in AI & Society, cautions that while these digital agents may offer comfort, their widespread adoption could carry deeply disruptive implications for human connection, social development, and moral behavior.

The study’s author, Kim Malfacini, presents a detailed assessment of the ways in which human–AI relationships may be transforming interpersonal norms. The report categorizes the risks and benefits of companion AI into four domains: social skills, social motivation, moral capacities, and personal well-being. It also introduces a critical design framework urging developers to create companion AI that supports, not supplants, human-to-human relationships.

Can AI companions enhance or erode our ability to relate to other humans?

One of the key tensions examined in the study is the risk of social deskilling. As companion AI becomes more conversational, emotionally responsive, and ever-available, users may grow accustomed to interactions that demand less patience, vulnerability, or effort than human relationships require. Over time, this ease may reduce a person’s ability or willingness to navigate the challenges of real-world relationships. The study links this pattern to a broader concern that people could become emotionally reliant on AI relationships that don’t require reciprocal care, emotional labor, or genuine compromise.

These risks are not merely theoretical. Historical parallels include fears about the telephone eroding household conversation or digital messaging impairing face-to-face communication. But AI introduces a novel variable: its anthropomorphic mimicry and responsiveness may train people to hold human relationships to impossible standards of unwavering attentiveness, uncritical support, and instant availability.

Yet the picture is not entirely bleak. The study also documents potential for social upskilling. For individuals with social anxiety, neurodivergence, or limited opportunities for real-world interaction, AI companions can serve as low-risk environments for practicing empathy, conversation, and emotional expression. Some users of Replika, for example, reported increased confidence in human relationships after extended interaction with their AI partners.

The study suggests that whether companion AI results in upskilling or deskilling may depend on design features, user demographics, and emotional context. It also warns that unless AI tools are intentionally engineered to reinforce positive social behavior, they may passively encourage poor communication habits or even abusive tendencies.

Does companion AI motivate or demotivate people to form real relationships?

The study introduces another critical concern: the potential for AI companionship to reduce social motivation. If AI can meet enough emotional needs to alleviate loneliness or stress, users may feel less compelled to maintain or pursue relationships with real people. This is especially troubling when viewed through the lens of attachment theory, which holds that as emotional bonds with one entity strengthen, others may weaken.

Some qualitative research supports this concern. Certain users have substituted AI companions for therapists or romantic partners, and in doing so, disengaged from existing human relationships. The ease and control afforded by AI, particularly the ability to shape its personality, set its boundaries, and terminate conversations at will, can create a dynamic that feels safer and more satisfying than unpredictable, effortful human interaction.

Yet the study also cites counter-evidence. For many, AI companionship provides a stepping-stone to renewed human connection. It can decrease acute feelings of loneliness, boost mood, and inspire outreach. In some cases, AI agents have encouraged users to reconnect with estranged loved ones or seek therapy. This duality underscores that impact depends on user intent and system design: AI can act as either a bridge or a barrier to human connection.

The study notes that vulnerable populations may be especially susceptible to the replacement effect. These include individuals experiencing grief, adolescents navigating identity, and those with unmet romantic or sexual needs. In regions where traditional dating or socialization is constrained by long work hours, cultural taboos, or social stigma, companion AI adoption is accelerating. While this can provide relief, it may also entrench isolation if the technology substitutes for, rather than complements, real-life connection.

How should AI companions be designed to protect and promote human relationships?

Rather than abandoning companion AI development, the study advocates for its ethical design. Developers are urged to shift their mindset from creating AI that mimics human connection to building AI that facilitates it. This means integrating features that encourage users to engage more meaningfully with people in their lives, rather than becoming dependent on machines.

For instance, AI companions could reinforce strong social behavior by praising empathy, correcting rudeness, or modeling respectful dialogue. They could actively prompt users to reach out to friends or family, or even suggest joint activities and help coordinate them. Time-use transparency and sunset features that allow users to gracefully phase out their AI relationships may help prevent emotional over-dependence, as the sketch below illustrates.
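To make these mechanisms concrete, here is a minimal, hypothetical Python sketch of what time-use transparency and a sunset feature might look like in practice. It is not drawn from the study or any real product; the class name, thresholds, and nudge wording are all illustrative assumptions.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical thresholds -- illustrative assumptions, not from the study.
DAILY_MINUTES_SOFT_CAP = 60   # surface a time-use notice past this point
SUNSET_WINDOW_DAYS = 30       # length of a user-initiated wind-down period


@dataclass
class CompanionSession:
    """Tracks usage so the app can surface time-use transparency nudges."""
    minutes_today: int = 0
    sunset_started: date | None = None

    def log_minutes(self, minutes: int) -> str | None:
        """Record usage; return a gentle pro-social nudge past the soft cap."""
        self.minutes_today += minutes
        if self.minutes_today > DAILY_MINUTES_SOFT_CAP:
            return (f"You've chatted for {self.minutes_today} minutes today. "
                    "Is there a friend you'd like to check in with?")
        return None

    def begin_sunset(self, today: date) -> None:
        """Start a graceful phase-out of the AI relationship."""
        self.sunset_started = today

    def sunset_progress(self, today: date) -> float:
        """Fraction of the wind-down period elapsed (0.0 to 1.0)."""
        if self.sunset_started is None:
            return 0.0
        elapsed = (today - self.sunset_started).days
        return min(elapsed / SUNSET_WINDOW_DAYS, 1.0)


# Example: crossing the soft cap triggers a prompt toward human contact.
session = CompanionSession()
session.log_minutes(45)                # under the cap: no nudge yet
print(session.log_minutes(30))         # 75 minutes total: nudge appears
session.begin_sunset(date(2025, 4, 19))
print(session.sunset_progress(date(2025, 5, 4)))  # 0.5, halfway wound down
```

A real deployment would tie such thresholds to user consent and wellbeing research rather than fixed constants, but the sketch shows how little machinery the study's proposals actually require.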

However, implementing these ideas is not without challenges. There is a commercial disincentive: AI companies profit from user engagement, not user disconnection. Features that encourage less time spent with the app or that "correct" user behavior may reduce satisfaction or revenue. Additionally, designing AI to be human-like may boost social upskilling but also raise the risk of replacement. The study calls this the "Autonomy-Control Paradox," a dilemma in which the tools that best serve user growth may conflict with what users want in the moment or with company incentives.

With adoption of AI companionship accelerating, especially among younger generations, the design choices made today will shape relational norms for years to come. If these tools are developed in isolation from ethical reflection and social insight, they risk fraying the very fabric of human connection. But if guided by principles of upskilling, empathy, and motivation toward human engagement, they could serve as valuable allies in addressing modern loneliness without compromising our capacity for real love, friendship, and community.

First published in: Devdiscourse