AI and the self: How chatbots shape narrative identity in young adults
Young adults are increasingly turning to generative artificial intelligence (genAI) not only to complete tasks but to make sense of themselves, their relationships, and their life choices. What begins as a tool for productivity or information often evolves into something far more intimate: a conversational partner that helps users interpret emotions, rehearse social interactions, and reflect on personal identity.
A new peer-reviewed study titled "Encountering Generative AI: Narrative Self-Formation and Technologies of the Self Among Young Adults," published in the journal Societies, analyzes how young adults integrate generative AI chatbots into everyday practices of reflection, decision-making, and meaning-making, and how these interactions reshape the conditions under which selfhood is formed.
From digital tool to partner in self-understanding
According to the study, young adults often underestimate how deeply generative AI shapes their inner lives. Most participants initially described their use of ChatGPT as purely instrumental. They saw it as a faster alternative to search engines, a writing aid, or a planning assistant. However, as interviews unfolded, nearly all participants came to recognize that their interactions with the chatbot had emotional, relational, and self-formative dimensions they had not previously acknowledged.
Participants described using generative AI to interpret personal experiences, analyze recurring behavioral patterns, and test different ways of understanding themselves. Some uploaded long autobiographical texts and asked whether their life patterns made sense. Others sought explanations for emotional reactions, relationship dynamics, or recurring anxieties. These practices align closely with what narrative psychology identifies as autobiographical reasoning, a core process through which individuals construct a sense of continuity and identity across time.
The study draws on philosopher Paul Ricœur’s theory of the narrative self to explain why these interactions matter. According to this framework, selfhood is not a fixed entity but an ongoing narrative achievement, created through the interpretation and organization of life events. By participating in this interpretive work, generative AI does more than provide information. It becomes involved in shaping how individuals understand who they are and how their experiences fit together.
The research also highlights a striking paradox. Participants often found AI-generated interpretations helpful and even comforting, while remaining aware that the system lacked genuine understanding. Many questioned the depth or accuracy of the insights they received but continued to value them for their ability to prompt reflection or provide a sense of clarity. This tension between perceived insight and recognized superficiality emerged as a defining feature of AI-mediated self-reflection.
The authors argue that this paradox helps explain why generative AI can exert such influence. The system does not need to be correct in an objective sense to be effective. Its value lies in its capacity to organize thoughts, suggest connections, and create a feeling of being understood, even when users know that understanding is simulated.
Efficiency, connection, and the quiet reshaping of relationships
The study shows that generative AI is increasingly used to navigate social and relational life. Participants reported consulting ChatGPT to draft messages, interpret social situations, and decide how to respond in moments of uncertainty. For many, the appeal lay in the chatbot’s availability and lack of judgment. It offered a low-risk space to rehearse interactions without fear of embarrassment or rejection.
These practices positioned AI as a form of social scaffolding, supporting users as they managed complex interpersonal dynamics. However, the research also documents a more subtle shift. In some cases, AI began to replace human interlocutors for everyday consultation. Participants described asking ChatGPT questions they might otherwise have posed to friends, family members, or partners, particularly when the issues felt minor or emotionally awkward.
This pattern created a second major tension identified in the study: algorithmic support versus relational displacement. While AI reduced friction and made reflection easier, it also altered the ecology of everyday relationships. Some participants worried that relying on AI for small decisions or emotional reassurance could gradually reduce mundane human interactions that, while seemingly trivial, play an important role in maintaining social bonds.
The issue became more pronounced for participants who lacked strong social support networks. For these individuals, ChatGPT sometimes filled a genuine relational gap. It provided a sense of being heard and acknowledged in situations of loneliness or isolation. The authors note that while participants generally resisted describing AI as a friend, their accounts often revealed practices functionally similar to companionship.
This reluctance reflects broader cultural norms that privilege human connection as authentic and view algorithmic interaction as inferior or unsettling. Yet the study suggests that these norms coexist uneasily with everyday reliance on AI. Participants navigated this contradiction pragmatically, using generative AI while maintaining a critical distance from the idea of machine companionship.
From a theoretical perspective, the findings extend debates about digital sociality. Rather than replacing human relationships outright, generative AI reshapes patterns of consultation and support. It becomes part of a broader relational system in which individuals distribute different forms of interaction across human and non-human interlocutors. This redistribution may have cumulative effects on how selfhood is formed through dialogue and recognition.
Agency, decision-making, and the politics of self-governance
Participants reported using generative AI for a wide range of choices, from trivial preferences to significant life decisions. These included educational paths, political positions, personal relationships, and moral dilemmas. The ease of asking AI for guidance blurred traditional boundaries between low-stakes and high-stakes decisions.
The authors identify three modes of AI-assisted deliberation. In some cases, participants used ChatGPT for informational consultation, gathering facts or summarizing options. In others, they engaged in heuristic scaffolding, asking the system to structure pros and cons or clarify trade-offs while retaining final judgment. A third mode involved normative outsourcing, where participants followed AI recommendations with minimal critical evaluation.
Most participants moved fluidly between these modes rather than adhering to a single pattern. This flexibility challenges simplistic narratives about AI either empowering or undermining human agency. Instead, agency emerged as something negotiated in practice, shaped by context, habit, and emotional state.
However, the study raises concerns about how easily deliberative outsourcing can become routine. Several participants described moments when asking ChatGPT became automatic, replacing independent reflection or human consultation. Over time, this habitual reliance could subtly reshape dispositions toward thinking and decision-making, even if users remain capable of critical judgment.
To analyze these dynamics, the authors draw on Michel Foucault’s concept of technologies of the self. This framework emphasizes practices through which individuals work on themselves to achieve desired states such as competence, well-being, or moral clarity. Generative AI fits squarely within this tradition, offering tools for self-optimization, emotional regulation, and personal improvement.
Yet these technologies of the self are never neutral. They operate within broader regimes of power and knowledge that define what counts as a good or successful self. In the case of generative AI, the study suggests that norms of efficiency, productivity, and emotional management are encoded into the system’s responses. As users adopt these frameworks, often unconsciously, they participate in a form of self-governance shaped by algorithmic rationalities.
Privacy concerns further complicate this picture. Participants expressed unease about sharing intimate thoughts with AI systems whose data practices they did not fully understand. Some managed this risk by limiting disclosure or deleting chat histories, while others adopted a more resigned stance. These strategies highlight how vulnerability and autonomy intersect in AI-mediated self-reflection.
First published in: Devdiscourse

