Overreliance on generative AI reduces academic performance
A new study published in Behavioral Sciences highlights the growing concern over university students’ reliance on generative artificial intelligence (GenAI) tools and its unintended consequences for learning. The research warns that excessive dependence on GenAI can undermine students’ academic performance by inflating their self-confidence in ways that hinder genuine skill development.
Titled “Effect of GenAI Dependency on University Students’ Academic Achievement: The Mediating Role of Self-Efficacy and Moderating Role of Perceived Teacher Caring”, the study provides one of the most detailed examinations yet of how AI tools affect student learning outcomes. By surveying 418 students from public universities in China, the authors reveal the nuanced psychological mechanisms behind the relationship between AI use and academic achievement.
How GenAI dependency affects student performance
The study’s key finding is that GenAI dependency negatively predicts academic achievement. While AI-powered tools offer convenience and can support certain tasks, students who rely too heavily on them often develop false self-efficacy: an inflated sense of their own competence. This overconfidence, which resembles the well-documented Dunning–Kruger effect, can lead students to skip essential learning steps and to struggle when faced with complex academic tasks that require genuine understanding.
According to the researchers, this dynamic unfolds as students shift from using GenAI as a collaborative learning partner to treating it as a substitute for their own thinking. Such dependency reduces engagement with core learning processes, weakening the development of critical reasoning and problem-solving skills. The result is diminished academic performance, despite the appearance of productivity that GenAI tools may initially offer.
A significant insight from the study is the mediating role of self-efficacy: the negative link between GenAI dependency and academic performance is explained by changes in how students perceive their own abilities, and those distorted perceptions in turn shape their academic results.
Role of teacher support in mitigating risks
In addition to the direct relationship between GenAI use and learning outcomes, the study examines how perceived teacher caring shapes students’ experiences. The researchers identify teacher caring as a crucial moderating factor that can reduce the harmful effects of GenAI dependency.
When students feel supported, guided, and encouraged by their teachers, the tendency to over-rely on GenAI diminishes. Teacher presence and active involvement create a learning environment where AI is framed as a tool for exploration and collaboration rather than as a shortcut to avoid effort. This supportive atmosphere helps students maintain a realistic sense of their capabilities, improving their capacity to use AI effectively and responsibly.
However, the authors caution that while teacher caring mitigates the overconfidence generated by GenAI, it does not by itself fully resolve the negative impact on academic achievement. The findings suggest that interventions need to go beyond interpersonal support to include structured educational strategies, such as integrating critical AI literacy into curricula and guiding students on how to balance AI assistance with personal effort.
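For readers unfamiliar with these terms, "mediation" and "moderation" describe a specific statistical structure rather than a loose metaphor. The short sketch below, written in Python with simulated data, illustrates that structure only; the variable names (genai_dependency, self_efficacy, teacher_caring, achievement), the sample size, and every coefficient are placeholders chosen for illustration and are not taken from the study's actual measures, model, or results.

# Illustrative sketch only: simulated data and arbitrary coefficients,
# not the study's measures, model, or estimates.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 418  # same order of magnitude as the reported sample, purely for realism

# Simulated, standardized placeholder variables.
genai_dependency = rng.normal(size=n)
teacher_caring = rng.normal(size=n)
self_efficacy = (-0.4 * genai_dependency                    # arbitrary sign and size
                 + 0.3 * genai_dependency * teacher_caring  # arbitrary buffering term
                 + rng.normal(scale=0.8, size=n))
achievement = (0.5 * self_efficacy - 0.2 * genai_dependency
               + rng.normal(scale=0.8, size=n))

df = pd.DataFrame({
    "genai_dependency": genai_dependency,
    "teacher_caring": teacher_caring,
    "self_efficacy": self_efficacy,
    "achievement": achievement,
})

# Mediator equation: does dependency predict self-efficacy, and does
# teacher caring change (moderate) that relationship? The interaction
# term genai_dependency:teacher_caring captures the moderating role.
mediator_model = smf.ols("self_efficacy ~ genai_dependency * teacher_caring",
                         data=df).fit()

# Outcome equation: does self-efficacy predict achievement once the
# direct effect of dependency is controlled for? The indirect path
# through self_efficacy is the mediating role.
outcome_model = smf.ols("achievement ~ self_efficacy + genai_dependency",
                        data=df).fit()

print(mediator_model.params)
print(outcome_model.params)

In this framing, the study's "mediating role of self-efficacy" corresponds to the indirect path running from dependency through the first equation into the second, while the "moderating role of perceived teacher caring" corresponds to the interaction term in the mediator equation.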
Implications for education in the age of AI
The study warns that simply providing access to AI tools without addressing their psychological and pedagogical impacts can inadvertently harm student learning.
The authors argue that educational institutions should focus on fostering authentic self-efficacy, meaning confidence grounded in actual skills and understanding rather than confidence inflated by AI-generated output. This requires a dual approach: ensuring that students develop the cognitive and metacognitive abilities to critically engage with AI tools, and equipping teachers with strategies to guide students in using these tools constructively.
Moreover, the research highlights the need for a balanced relationship between technology and human guidance in academic settings. As GenAI becomes more sophisticated and accessible, students must learn not only to use these tools but also to recognize their limitations. Overreliance can lead to shallow learning and dependency, while mindful integration can enhance learning outcomes.
The findings suggest that universities should adopt policies and practices that encourage collaborative rather than exploitative use of GenAI, such as incorporating AI-assisted assignments that still require students to demonstrate individual reasoning and problem-solving.
- FIRST PUBLISHED IN: Devdiscourse

