Everyday AI use can slowly shift control away from human thinking
As generative AI systems become embedded in professional, academic, and creative workflows, new research is raising concerns not about performance loss, but about a more subtle shift: the gradual weakening of human agency and authorship in everyday cognition. What appears to be a productivity gain may, at the same time, be changing the relationship individuals have with their own thinking.
The study, "Psychological absorption and the maintenance of agency in AI-mediated cognition: a relapse prevention perspective," published in AI & Society, introduces a conceptual framework to explain how sustained reliance on generative AI can lead to a gradual externalization of cognitive processes. The research presents AI use as a process of "psychological absorption," where thinking, decision-making, and meaning construction are increasingly delegated to AI systems, often without conscious awareness.
Gradual delegation, not addiction, reshapes human thinking
The study challenges dominant narratives that frame AI as either a productivity enhancer or a potential source of technological dependency. Instead, it proposes a more nuanced view: the erosion of agency does not occur suddenly or through compulsion, but through a series of small, rational decisions made under pressure.
In modern work environments defined by speed, cognitive overload, and constant evaluation, turning to AI is often the most efficient response. Whether drafting reports, structuring arguments, or refining language, AI offers immediate clarity and coherence. These benefits reinforce repeated use, gradually normalizing the delegation of tasks that were previously central to human cognition.
The researchers draw on relapse prevention theory to explain this process. Traditionally used to understand behavioral patterns in addiction, the framework is adapted here to describe how agency can erode incrementally. The concept of "apparently irrelevant decisions" becomes central: small, routine choices to rely on AI for initial drafts, idea refinement, or problem-solving accumulate over time, shifting the individual's role from originator of thought to curator of externally generated content.
This shift does not impair performance. In many cases, it improves it. Outputs become more polished, structured, and efficient. This is precisely why the process remains largely invisible. There is no immediate failure, no clear signal that something is being lost. Instead, the study identifies a growing disconnect between producing ideas and experiencing oneself as their author.
This phenomenon is especially pronounced in domains tied to personal identity and expertise. When individuals delegate tasks that define their professional or intellectual role, the impact on agency is deeper. Over time, the act of thinking itself may begin to feel external, triggered by prompts rather than internally generated.
The shift from thinking to prompting alters authorship
The study's staged model of "AI absorption" outlines how AI evolves from a helpful tool into a structural component of cognition. In early stages, AI functions as a support system: individuals use it to reduce cognitive load while maintaining control over ideas and decisions. As reliance increases, however, a qualitative shift occurs. Thinking becomes less about internal exploration and more about constructing effective prompts. The cognitive effort moves from generating ideas to eliciting them.
This transition marks a turning point. The individual is no longer primarily engaged in thinking but in managing outputs. While they may still evaluate, edit, and refine AI-generated material, the core act of idea formation has been externalized. Authorship becomes indirect.
The study describes how this leads to a recalibration of self-efficacy. People do not necessarily lose confidence in their abilities, but they begin to perceive unaided thinking as slower, less efficient, and less valuable. This subtle shift in cognitive self-trust reinforces further reliance on AI, creating a feedback loop.
Over time, this process can result in what the researchers call "experiential flattening." Despite producing high-quality work, individuals report a reduced sense of engagement or ownership. The output meets expectations, but the internal experience of having created something meaningful diminishes.
This effect is not driven by technological failure but by success. AI systems deliver coherent, articulate, and contextually appropriate outputs that resemble human reasoning. Because these outputs align with user intentions, they are easily accepted as one's own, even when the underlying cognitive effort has been outsourced.
The study identifies this as a key psychological risk of generative AI: it mimics the experience of thinking without requiring the effort that typically produces ownership. The result is a growing gap between recognition and generation, where individuals recognize ideas as valid but cannot fully reconstruct or internalize them.
Maintaining agency becomes a psychological and institutional challenge
The study frames AI use as a "maintenance challenge" rather than a problem to be solved through restriction, arguing that the goal is not to reduce AI usage but to sustain psychological presence while using it. This requires awareness of how and when cognitive functions are delegated, as well as deliberate efforts to reclaim authorship.
Early warning signs of absorption are subtle and often overlooked. These include reduced tolerance for ambiguity, impatience with unstructured thinking, and a tendency to externalize initial ideas before engaging with them internally. Because these changes do not affect performance, they are rarely recognized as risks.
To address this, the researchers propose simple but structured practices. These include initiating tasks without AI assistance, introducing pauses before using external tools, and actively reflecting on whether generated outputs are genuinely understood and owned. The focus is on restoring the internal processes that support meaning-making, rather than rejecting technological assistance.
The study also highlights the role of institutions in shaping cognitive behavior. Educational systems and workplaces that prioritize speed, efficiency, and polished outputs may inadvertently encourage unreflective AI use. Without structures that support independent thinking and reflection, individuals are more likely to rely on AI as a default.
Emerging approaches in professional training are beginning to incorporate AI supervision, requiring users to explain and justify outputs. However, the study argues that this is not enough. Evaluation must move beyond accuracy to include authorship, asking not only whether something is correct but whether it is truly understood.
If AI continues to shape how knowledge is produced, societies may need to redefine what it means to think, learn, and create. The challenge is not technological advancement but the preservation of human engagement within that advancement.
First published in: Devdiscourse