The quiet rise of AI as an emotional lifeline for students
Young adults may publicly describe generative artificial intelligence (genAI) as a practical tool while privately using large language models (LLMs) for emotional advice, validation and, in some cases, companionship, according to a new study published in AI & Society.
The study, titled "Symmetries and asymmetries between attitudes and interaction in relation to the emotional uses of LLMs," examines how Mexican university students aged 18 to 24 think about and interact with large language models, drawing on 285 responses to a mixed questionnaire and 35 semi-structured interviews to compare declared attitudes with actual interaction patterns.
Attitudes toward emotional AI use remain cautious, but behavior tells a more complex story
The research focuses on a sharp mismatch that could shape future discussions on AI, youth well-being and digital trust. Students in the study did not generally report strong emotional attachment to generative AI. On a five-point scale measuring attitudes toward emotional uses of AI, the average score was 2.47, indicating low-to-moderate acceptance and a cautious or mildly conservative stance toward using AI for emotional purposes.
However, the interviews and open-ended survey responses showed that emotional engagement was already present. Students who framed AI as a school, work or information tool also described turning to it for personal advice, stress-related support, emotional clarification and decision reassurance. The authors argue that this gap between what users say and what they do is not accidental. It reflects what they call a symmetry-asymmetry model: declared attitudes sometimes match instrumental behavior, but they often fail to capture the full range of emotional interactions taking place in private.
The clearest symmetry appeared when students treated AI as a tool. Many participants said they used large language models to clarify concepts, complete assignments, organize tasks, automate work or obtain quick information. In these cases, low emotional attitudes aligned with task-focused use. The interaction remained practical, controlled and distant.
The asymmetry emerged when those same or similar users shifted into emotional exchanges. The study found that students could maintain a public or self-declared view of AI as merely functional while still relying on it in moments of emotional need. This suggests that conventional surveys on attitudes toward AI may miss important patterns if they do not also examine actual interaction repertoires.
The study's methodological design was built around that concern. The authors used the EAUE-GenAI scale, a 10-item instrument developed to measure attitudes toward emotional uses of generative AI. The scale showed strong internal reliability, with a Cronbach's alpha of 0.90. The researchers initially expected two dimensions: AI-mediated emotional expression and AI-mediated emotional regulation. However, the results instead pointed to a single broader dimension: AI-mediated emotional experience.
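The paper reports only the summary statistic, but for readers unfamiliar with it, Cronbach's alpha measures how consistently a set of items tracks a single underlying construct. Below is a minimal sketch of the calculation using simulated Likert responses; the function name and the simulated data are illustrative assumptions, not the study's code or dataset.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items matrix of Likert scores."""
    k = items.shape[1]                          # number of items (10 for a 10-item scale)
    item_vars = items.var(axis=0, ddof=1)       # variance of each item across respondents
    total_var = items.sum(axis=1).var(ddof=1)   # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative only: simulated 1-to-5 responses driven by one shared factor.
rng = np.random.default_rng(0)
latent = rng.normal(size=(285, 1))                     # shared "attitude" factor
noise = rng.normal(scale=1.0, size=(285, 10))          # item-specific noise
scores = np.clip(np.round(3 + latent + noise), 1, 5)   # 10 correlated Likert items
print(f"alpha = {cronbach_alpha(scores):.2f}")         # strongly correlated items yield a high alpha
```

When items are dominated by a single shared factor, as in this simulation, alpha is high; the study reports 0.90 for its real data, which is why the authors describe the scale as internally reliable.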
That finding suggests that young users may not clearly separate expressing emotions to AI from using AI to regulate emotions. In practice, those processes may blend together. A student who asks an AI system for advice during a stressful moment may also be organizing feelings, seeking reassurance and testing an interpretation of events. The emotional function is not always explicit, but it is present.
Additionally, no substantial differences were found by age or gender in the patterns examined. Instead, participants showed broadly similar forms of use, divided into instrumental relationships and emotional relationships. The emotional side was not dominant in all interactions, but it was significant enough to warrant closer study.
Emotional use develops from advice to validation and anthropomorphization
The authors identified three levels of emotional interaction with large language models: emergent emotional advice, validation and anthropomorphization. These levels form a progressive scale, moving from situational emotional support to deeper relational engagement.
Emergent emotional advice
Emergent emotional advice accounted for 60 percent of coded emotional interaction responses. In this mode, students used AI for immediate emotional guidance in specific situations. They did not necessarily treat the system as human-like or emotionally aware. Instead, they used it as an accessible, low-friction resource when they needed help understanding a situation, coping with stress or finding words for what they were feeling.
This form of interaction was often casual and episodic. It emerged because AI systems are available at any time through phones or computers and can provide fast, structured responses. The study stresses that this does not mean AI is replacing formal emotional support systems. Rather, it is becoming a temporary resource when young people face uncertainty, loneliness, stress or lack of immediate access to another person.
Validation
Validation accounted for 22.81 percent of coded emotional interaction responses. Here, users turned to AI not simply for advice but for confirmation. They sought reassurance that their interpretation of a situation was reasonable, that their decision made sense or that their feelings were understandable. This form of use involved more trust in the system's perceived neutrality.
The researchers warn that validation can create cognitive risks. Students may believe that AI offers a neutral or nonjudgmental response, even though LLMs are not neutral agents. Their outputs are shaped by training data, model design, alignment methods and interaction patterns. The study links validation practices to trust bias and selective attention, where users may give greater weight to responses that confirm what they already think or feel.
This risk becomes more serious when AI systems mirror user language or provide agreeable responses. The paper discusses mechanisms such as mirroring and sycophancy, which can make AI interactions feel emotionally safe and affirming. Mirroring occurs when the system reproduces a user's tone, style or emotional framing. Sycophancy occurs when the system tends to agree with or validate the user. These features can strengthen engagement but may also reinforce fragile assumptions, emotional dependency or uncritical trust.
Anthropomorphization
Anthropomorphization accounted for 17.19 percent of coded emotional interaction responses. In this category, users attributed human-like qualities to AI, describing it as a companion, friend, classmate or work colleague. Some participants reported giving AI nicknames, thanking it, feeling listened to or seeing it as a presence that could understand them.
The authors treat anthropomorphization as the highest level of emotional involvement because it goes beyond advice or validation. It involves the attribution of mental states to the system. Users begin to act as though the model understands, accompanies or cares, even though it operates through statistical language generation rather than human intention.
The study links this process to two major theoretical frames. The first is Daniel Dennett's concept of the intentional stance, which explains how humans may interpret complex systems as if they had beliefs, desires or intentions. The second is Niklas Luhmann's theory of trust, which helps explain why people rely on opaque systems they do not fully understand. Large language models combine both conditions: they are technically complex and difficult for users to interpret, while also producing fluent conversational language that can feel socially meaningful.
That combination creates fertile ground for emotional projection. Users may not understand how the system works, but they can understand and respond to its language. Because language is deeply tied to social interaction, politeness, empathy and recognition, the system's conversational form can invite emotional interpretation.
Findings raise privacy, trust and mental health questions as AI becomes part of daily life
Emotional use is already developing, and it is not fully explained by declared attitudes. Young people may reject or minimize the idea that they emotionally rely on AI while still using it for emotional regulation, reassurance or companionship in specific contexts.
That pattern has important implications for policymakers, educators, developers and mental health researchers. If users do not openly recognize their emotional reliance on AI, risks may remain hidden. These risks include overtrust, privacy exposure, emotional dependency, cognitive overload and the gradual normalization of AI as a substitute for human validation.
The privacy issue is especially significant. Emotional interactions often involve sensitive personal information. Students seeking advice or validation may disclose relationship problems, insecurity, distress, family issues or personal doubts. The more they experience the system as neutral, available and nonjudgmental, the more likely they may be to share details that they would otherwise reserve for trusted people or professionals.
The study also warns that emotional engagement can emerge even during short use sessions. Most participants reported low daily use: 56 percent used generative AI for one to 15 minutes a day, while 22.86 percent used it for 16 to 30 minutes. Only 2.11 percent reported using it for more than an hour a day. Yet emotional interactions appeared across use categories. That finding challenges the assumption that emotional reliance develops only after prolonged daily use.
The researchers suggest that longer or more repeated interactions may increase relational complexity. Anthropomorphization was linked to sustained interaction, multiple conversational turns and broader consultation across both practical and personal topics. The more the model becomes woven into daily routines, the more likely users may be to assign it social or emotional meaning.
The authors suggest that young people may turn to AI for validation partly because of weakened or strained social bonds. In such cases, AI becomes a quick source of reassurance when friends, family or institutional supports are unavailable, insufficient or difficult to approach. The model may not be seen as human, but it can still function as an emotional resource.
LLMs are market products shaped by human feedback, alignment goals and engagement incentives, the study contends. As AI systems become more polite, responsive and emotionally fluent, they may become more attractive as affective tools, even when they are not designed or approved as mental health supports.
The authors also place their findings alongside prior research on affective AI use, including work by OpenAI and the MIT Media Lab that found emotional exchanges in a minority of ChatGPT interactions. Duque Parra and Santes Ortega argue that such uses may grow as generative AI becomes more embedded in everyday life.
The sample is limited to Mexican university students aged 18 to 24, so the findings cannot be generalized to all young people or all cultural settings. The categories are exploratory and require further validation. The authors also acknowledge that the study offers not a fully settled theory but a framework for future research.
First published in: Devdiscourse