Academic performance drops as students rely heavily on ChatGPT
A new academic investigation has identified a growing psychological threat in university classrooms: learning burnout triggered by the misuse of generative artificial intelligence. The peer-reviewed study, titled “Mitigating Learning Burnout Caused by Generative Artificial Intelligence Misuse in Higher Education: A Case Study in Programming Language Teaching”, was published in Informatics. It presents empirical evidence from China’s Shandong Institute of Petroleum and Chemical Technology showing that overreliance on tools like ChatGPT is degrading students’ motivation, cognitive engagement, and academic performance.
Surveying 143 undergraduate students majoring in computer science, digital economy, and IoT engineering, the study assessed emotional and cognitive responses to GenAI use in higher education. It then piloted a trio of pedagogical interventions: cheating detection software, peer-reviewed video assessments, and anonymous feedback channels. Collectively, these measures improved learning outcomes and reversed burnout trends. The study proposes a scalable, human-centered framework to help educators integrate AI while preserving student autonomy and deeper learning.
What drives learning burnout in the GenAI era?
The research identifies five core burnout drivers tied to GenAI use in academic settings: information overload, technological overdependence, limitations of personalization, marginalization of educators, and declining intrinsic motivation.
Information overload emerges as a dominant factor. Over 43% of students reported feeling overwhelmed by the volume and velocity of information GenAI tools provided, while 54% said they didn't develop a deep understanding of the material in AI responses. When cognitive load exceeds working memory capacity, students become less likely to reflect, analyze, or synthesize knowledge, the hallmarks of higher-order learning.
Overdependence on AI tools compounds this issue. Nearly 40% of students reported fewer opportunities for independent thinking when using GenAI, and a quarter said they relied on it for most of their assignments. While not all believed their critical thinking had declined, researchers warn this could reflect a lack of awareness about the subtle erosion of metacognitive abilities.
Personalized learning, often touted as a GenAI strength, was found to have mixed effects. Although most students didn’t report total isolation, 34% struggled to regulate their learning strategies. This suggests that highly customized content streams may inhibit students’ ability to set long-term academic goals or collaborate meaningfully with peers.
Further, the role of the teacher is shifting. While over 70% of students still valued educator input, more than a quarter preferred GenAI feedback to that of instructors. The report concludes that teachers risk being sidelined unless they reposition themselves as cognitive and motivational coaches.
Finally, the study found clear signs of declining academic motivation. Between 22% and 33% of students said GenAI made coursework too easy or diminished their interest in the subject. The authors link this trend to a weakening sense of challenge and achievement—a known threat to intrinsic motivation under self-determination theory.
How can educators address AI-driven burnout?
To counteract these dynamics, the study introduced and tested a three-part instructional response strategy. First, cheating detection tools were integrated into the OnlineJudge system to identify AI-generated or plagiarized content through timestamp analysis, standardized code patterns, and unnatural naming conventions. This helped educators proactively flag disengaged or at-risk students.
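To illustrate, here is a minimal Python sketch of how such heuristics might be combined inside an online judge. The thresholds, marker strings, and the `flag_submission` helper are hypothetical; the paper describes the signals (timestamps, standardized patterns, naming) but does not publish its exact detection rules.

```python
from dataclasses import dataclass

@dataclass
class Submission:
    student_id: str
    seconds_since_opened: int   # gap between opening the problem and submitting
    source_code: str

# Hypothetical thresholds and markers -- assumptions, not the study's rules.
MIN_PLAUSIBLE_SECONDS = 120
BOILERPLATE_MARKERS = ("# Example usage", "def main():", 'if __name__ == "__main__":')
GENERIC_NAMES = ("result", "temp", "data", "value", "output")

def flag_submission(sub: Submission) -> list[str]:
    """Return the heuristic signals this submission trips, for instructor review."""
    signals = []
    if sub.seconds_since_opened < MIN_PLAUSIBLE_SECONDS:
        signals.append("implausibly fast submission (timestamp analysis)")
    if sum(m in sub.source_code for m in BOILERPLATE_MARKERS) >= 2:
        signals.append("standardized, tutorial-style code pattern")
    if sum(sub.source_code.count(n) for n in GENERIC_NAMES) > 10:
        signals.append("unnaturally generic identifier names")
    return signals

suspect = Submission("s042", 45, "def main():\n    result = data\n")
print(flag_submission(suspect))  # ['implausibly fast submission (timestamp analysis)']
```

A flagged submission would prompt a conversation rather than an automatic penalty, which matches the study's framing of detection as a way to spot disengaged students early.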
Second, a peer evaluation system was introduced. Students were required to record short problem-solving videos, which were then reviewed by three peers. This strategy forced learners to articulate their reasoning, reduced passive reliance on AI, and deepened conceptual understanding. It also fostered a sense of academic community and mutual accountability.
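A sketch of how the three-reviewer pairing could be automated follows, assuming a simple circular rotation. The `assign_reviewers` helper and the rotation scheme are illustrative; the study specifies only that each video is reviewed by three peers.

```python
def assign_reviewers(roster: list[str], per_video: int = 3) -> dict[str, list[str]]:
    """Assign each student's video to the next `per_video` classmates in
    circular order, so everyone reviews exactly `per_video` videos and
    never their own (requires len(roster) > per_video)."""
    n = len(roster)
    return {
        author: [roster[(i + k) % n] for k in range(1, per_video + 1)]
        for i, author in enumerate(roster)
    }

roster = ["ana", "ben", "chen", "dia", "eli"]
for author, reviewers in assign_reviewers(roster).items():
    print(f"{author}'s video -> {reviewers}")
# ana's video -> ['ben', 'chen', 'dia'], and so on around the circle.
```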
Third, anonymous feedback mechanisms were established. These included real-time surveys on workload and assignment difficulty, enabling instructors to adjust teaching intensity in response to student stress signals. The result was a flexible, data-driven classroom model that preserved rigor while supporting emotional regulation.
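The aggregation behind such a feedback loop can be very simple, as in the sketch below. The 1-5 scale, the alert threshold, and the `workload_signal` helper are assumptions for illustration, not the study's actual survey instrument.

```python
from statistics import median

# Illustrative parameters; the study does not publish its survey scale or rules.
WORKLOAD_ALERT = 4.0   # on a 1-5 scale, where 5 = overwhelming
MIN_RESPONSES = 10     # avoid reacting to a handful of outliers

def workload_signal(ratings: list[int]) -> str:
    """Summarize a week's anonymous workload ratings for the instructor."""
    if len(ratings) < MIN_RESPONSES:
        return "insufficient responses"
    return ("consider lightening assignments"
            if median(ratings) >= WORKLOAD_ALERT else "pace looks sustainable")

this_week = [5, 4, 4, 5, 3, 4, 5, 4, 4, 5, 3, 4]
print(workload_signal(this_week))  # consider lightening assignments
```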
Across two implementation cycles, student acceptance of these interventions increased markedly. For example, 75% of students rated the video tasks as moderately difficult in later rounds (up from 5% initially), and the percentage completing them in under 30 minutes rose from 34% to 77%. Active acceptance of the format rose from 25% to 46%, while rejection fell by more than half.
Do these strategies improve academic outcomes?
Yes, according to the study’s performance metrics, these interventions delivered measurable academic benefits. When comparing two cohorts of students in a Python programming course, the group using the new model (Class of 2023) outperformed the control group (Class of 2022) across nearly all score brackets. High-scoring students (those achieving 80–100) increased from 36% to over 50%, while average scores rose from 75.7 to 78.6 out of 100.
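Readers wanting to run this kind of bracket comparison on their own course data could start from a sketch like the one below. The scores here are synthetic, since the article reports only aggregate percentages and means.

```python
def bracket_shares(scores: list[float]) -> dict[str, float]:
    """Share of students in each grade bracket on a 0-100 scale."""
    counts = {"80-100": 0, "60-79": 0, "below 60": 0}
    for s in scores:
        if s >= 80:
            counts["80-100"] += 1
        elif s >= 60:
            counts["60-79"] += 1
        else:
            counts["below 60"] += 1
    return {k: round(v / len(scores), 2) for k, v in counts.items()}

# Synthetic scores, for illustration only -- the article reports aggregates.
class_2022 = [55, 62, 70, 74, 75, 78, 81, 85, 88, 79]
class_2023 = [58, 68, 75, 79, 82, 84, 86, 88, 91, 83]
for label, scores in (("2022", class_2022), ("2023", class_2023)):
    print(label, bracket_shares(scores), "mean:", sum(scores) / len(scores))
```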
These improvements were attributed to the reduction in AI-assisted shortcuts, higher student engagement, and better alignment between instructional practices and cognitive development needs. Interestingly, low-performing students (those below 60) remained relatively constant between groups, suggesting that the core benefits accrued to those already moderately engaged, but at risk of burnout.
The authors also emphasize that teacher attitudes and role adaptation are crucial for sustained results. Teachers must transition from knowledge dispensers to facilitators who foster motivation, design optimal challenges, and guide ethical AI use. Institutions, meanwhile, are urged to build capacity around digital literacy, AI ethics, and hybrid learning model design.

