AI dependence could reshape how people think, learn, and make decisions
Generative artificial intelligence (GenAI) is rapidly becoming embedded in education, work, and everyday decision-making, but a new study warns that its influence goes far beyond productivity gains. Researchers argue that the real impact of AI lies in how humans relate to it, with outcomes ranging from genuine intellectual growth to a subtle erosion of independent thinking. Their work reframes the debate around AI, shifting attention from what the technology can do to how users engage with it at a psychological and epistemic level.
The study, titled “The Relational–Epistemic Stance: Generative AI as a Dynamic Transitional Object,” published in AI & Society, introduces a new theoretical framework to explain why interactions with generative AI produce sharply different outcomes across individuals and contexts.
AI as a developmental interface, not just a tool
The study challenges dominant approaches that treat AI as an extension of human cognition or as a purely instrumental system. Existing frameworks such as the extended mind thesis and distributed cognition have emphasized how technology can augment human thinking, but they fail to explain why similar AI tools can lead to both enhanced understanding and superficial engagement.
The researchers argue that generative AI occupies a unique position because it does not simply assist with tasks but actively participates in the production of meaning. Unlike earlier tools, AI systems generate language, ideas, and structured outputs that resemble human thought processes. This creates a relational dynamic in which users do not just use AI but interact with it in ways that shape their own cognitive processes.
To capture this dynamic, the study draws on object relations theory, particularly the work of Donald Winnicott, Wilfred Bion, and Christopher Bollas. In this framework, a “transitional object” is something that exists between the internal and external world, helping individuals develop the capacity to think, reflect, and make sense of experience. The researchers extend this concept to AI, describing it as a “dynamic transitional object” that evolves in real time through interaction.
This reconceptualization has significant implications. It suggests that AI is not simply replacing or augmenting cognition but becoming part of the developmental process itself. The quality of this process depends on how users engage with AI outputs, whether they critically process and integrate them or passively accept them as finished products.
The study identifies a key process called “metabolization,” which refers to the transformation of external input into internally owned knowledge. In the context of AI, metabolization involves actively working through generated content, questioning it, reshaping it, and integrating it into one’s own understanding. When this process occurs, AI can function as a powerful cognitive scaffold that enhances learning and creativity.
However, the absence of metabolization leads to a very different outcome. Without this internal processing, users may adopt AI-generated content without fully understanding or owning it, creating an illusion of competence rather than genuine knowledge.
Risk of epistemic seduction and simulated competence
A key concern raised by the study is what the authors describe as “epistemic seduction.” This refers to the tendency of users to be drawn toward the fluency, coherence, and apparent authority of AI-generated outputs. Because generative AI produces responses that are polished and contextually appropriate, users may assume that these outputs are reliable and complete, reducing the motivation to engage critically with the material.
This dynamic introduces a new form of cognitive risk. Instead of traditional misinformation or error, the danger lies in the substitution of genuine thinking with simulated understanding. Users may feel confident in their knowledge because they can produce well-structured answers with the help of AI, even if they have not internalized the underlying concepts.
The study highlights that this risk is particularly pronounced in environments where efficiency and output are prioritized over process. In educational settings, for example, students may rely on AI to generate assignments, bypassing the cognitive effort required for learning. In professional contexts, workers may use AI to produce reports or analyses without fully engaging with the material, potentially weakening their expertise over time.
This phenomenon is not simply a matter of misuse but reflects a deeper psychological dynamic. The relational–epistemic stance of the user determines whether AI is approached as a partner in thinking or as a substitute for it. When users adopt a stance that prioritizes convenience and speed, they are more likely to fall into patterns of passive reliance.
The study also points to the role of uncertainty in shaping these interactions. Generative AI often provides definitive-sounding answers, even in situations where uncertainty is inherent. This can reduce users’ tolerance for ambiguity and discourage exploratory thinking, further reinforcing dependence on AI outputs.
Importantly, the researchers argue that epistemic seduction is not an inevitable outcome of AI use. It emerges under specific conditions, particularly when users lack awareness of the need for active engagement or when institutional structures incentivize quick results over deep understanding.
A new framework for responsible AI engagement
To address these challenges, the study introduces the concept of the relational–epistemic stance as a central variable in human-AI interaction. This stance reflects how individuals position themselves in relation to AI, including their expectations, level of trust, and willingness to engage critically with generated content.
A constructive stance is characterized by active engagement, critical reflection, and a willingness to tolerate uncertainty. Users operating from this position treat AI outputs as provisional inputs rather than final answers, using them as a starting point for further thinking. In this mode, AI can enhance creativity, support problem-solving, and facilitate deeper learning.
By contrast, a passive stance is marked by uncritical acceptance, overreliance, and a focus on efficiency. Users in this mode are more likely to accept AI outputs at face value, leading to shallow understanding and reduced cognitive autonomy.
These stances are not fixed traits but can shift depending on context, task, and user awareness. This suggests that interventions at the level of education, design, and policy can influence how individuals engage with AI.
From an educational perspective, the findings highlight the need to teach not only how to use AI tools but how to think with them. This involves developing skills in critical evaluation, reflective thinking, and the ability to integrate external inputs into internal knowledge structures. Educators are increasingly faced with the challenge of designing learning environments that encourage metabolization rather than substitution.
For organizations, the study calls for aligning AI adoption with long-term capability development. While AI can improve efficiency in the short term, overreliance without critical engagement may lead to a gradual erosion of expertise. Firms must balance the use of AI for productivity with the need to maintain and develop human skills.
The research also has implications for AI design. Systems that encourage interaction, reflection, and exploration may be more likely to support constructive engagement. Conversely, designs that emphasize speed and seamless output risk reinforcing passive use patterns.
At a broader level, the study calls for a shift in how AI is conceptualized in public discourse. Rather than focusing solely on performance metrics or economic impact, there is a need to consider the developmental and epistemic consequences of AI use. This includes examining how AI shapes not only what people do but how they think.
The findings suggest that the future of AI will depend as much on human factors as on technological advancements. As generative systems become more capable, the challenge will be to ensure that they support rather than undermine human agency and intellectual growth.
First published in: Devdiscourse

