Why generative AI improves learning for some students but not others

CO-EDP, VisionRI | Updated: 18-12-2025 21:29 IST | Created: 18-12-2025 21:29 IST

Generative artificial intelligence (genAI) tools are rapidly becoming embedded in higher education, altering how students study, complete assignments, and engage with academic content. While universities worldwide debate regulation and academic integrity, students are already integrating tools such as ChatGPT, Gemini, and Claude into daily coursework. A new empirical study suggests that the effectiveness of this shift depends less on institutional policies or technical training and more on how students perceive and experience these technologies.

The study, titled "Learning with Generative AI: An Empirical Study of Students in Higher Education" and published in Education Sciences, analyzes survey data from 485 college students across multiple disciplines to assess how generative AI influences learning effectiveness, attitudes, and satisfaction in academic settings.

The study examines the psychological and experiential dimensions of learning with generative AI. Its findings challenge the assumption that institutional training or ethical instruction alone can shape effective adoption, pointing instead to students’ perceptions, accumulated experience, and understanding of AI’s benefits and limitations as the central drivers of learning outcomes.

Attitudes and satisfaction: The core predictors of learning effectiveness

Students’ attitudes toward generative AI and their satisfaction with using these tools are the strongest predictors of effective learning. Learning effectiveness in the study is defined broadly, encompassing skill development, motivation, perceived benefits, and understanding of academic material. Across all four dimensions, students who reported more positive attitudes and higher satisfaction also reported stronger learning outcomes.

This finding suggests that generative AI does not function as a neutral academic aid. Instead, its educational value is mediated by students’ emotional and cognitive engagement with the technology. Students who view AI tools as helpful, innovative, and aligned with their learning goals are more likely to use them in ways that enhance comprehension and performance. Conversely, skepticism or discomfort with AI appears to limit its educational impact, even when the tools are readily available.

The research also finds a strong relationship between accumulated experience with generative AI and learning effectiveness. Students who reported greater familiarity with AI tools demonstrated higher satisfaction and more positive attitudes, alongside stronger perceptions of learning benefits. This pattern indicates that effective use of generative AI develops over time, as students learn how to frame queries, evaluate outputs, and integrate AI-generated content into their own thinking.

Frequency of use also plays a role, though its impact is more moderate. Regular engagement with generative AI correlates with improved satisfaction and perceived benefits, but the study notes that frequent use alone does not guarantee better learning outcomes. Instead, the quality of interaction and the student’s understanding of how and when to use AI appear to matter more than sheer volume of usage.

Interestingly, the variety of AI tools used shows only a weak association with learning effectiveness. Most students rely heavily on a small number of well-known platforms, suggesting that depth of engagement with a familiar tool may be more valuable than exposure to multiple systems. This finding runs counter to assumptions that broader tool adoption necessarily leads to better educational outcomes.

Perceived benefits outweigh ethical training and institutional support

The study brings to the fore the role of perceived advantages and disadvantages in shaping student experiences with generative AI. Students who strongly recognize benefits such as time savings, improved work quality, expanded learning options, and enhanced productivity are significantly more likely to report positive attitudes, higher satisfaction, and stronger learning effectiveness.

These perceived advantages appear to function as a motivational engine. When students believe that generative AI meaningfully supports their academic goals, they are more inclined to engage with course material, experiment with problem-solving approaches, and deepen their understanding. In this sense, perception acts as a bridge between technology and learning, translating technical capability into educational value.

On the other hand, perceived disadvantages have a measurable negative effect. Concerns about over-reliance on AI, erosion of critical thinking, reduced creativity, and weakening of academic skills are associated with lower learning effectiveness and more negative attitudes. The study shows that students who focus on these risks tend to report diminished benefits, even if they continue using AI tools for practical reasons.

Ethical knowledge, while statistically significant, plays a surprisingly limited role. Awareness of issues such as data privacy, misinformation, bias, and academic responsibility shows only weak positive correlations with learning effectiveness and satisfaction. This suggests that ethical understanding alone does not substantially influence how students experience generative AI as a learning tool.

Even more striking is the finding related to institutional training. Formal training provided by academic institutions shows no meaningful relationship with learning effectiveness, attitudes, or satisfaction. The study attributes this result to the scarcity of such training, with most students reporting little to no formal instruction on using generative AI. As a result, students appear to be learning how to use these tools independently, guided by trial and error rather than structured academic support.

This gap highlights a disconnect between institutional responses and student behavior. While universities debate guidelines and compliance, students are already incorporating AI into coursework without systematic guidance. The absence of training does not prevent adoption, but it may limit students’ ability to use AI critically and ethically.

Implications for universities facing rapid AI adoption

The study suggests that simply providing access to tools or issuing usage policies is insufficient. Effective integration depends on shaping students’ experiences, perceptions, and confidence in using AI as part of the learning process.

Rather than focusing narrowly on enforcement or prohibition, the research points toward a need for pedagogical strategies that integrate generative AI into coursework in transparent and purposeful ways. Students already use AI to summarize materials, clarify concepts, generate ideas, draft assignments, and practice skills. Aligning these uses with learning objectives could transform informal adoption into structured educational benefit.

The limited impact of ethical knowledge and institutional training also raises questions about how ethics is currently taught. The study indicates that ethical considerations may be perceived as external constraints rather than integral aspects of learning. Embedding ethical reflection directly into academic tasks, rather than treating it as a standalone requirement, may be more effective in influencing student behavior.

For educators, the findings underscore the importance of addressing student concerns openly. Fear of dependency, loss of originality, and erosion of skills are not abstract worries but tangible factors shaping learning outcomes. By engaging students in discussions about the strengths and limitations of generative AI, instructors can help them develop more balanced and productive approaches to its use.

The research also suggests that faculty development is critical. Many lecturers may lack familiarity with generative AI tools or feel unprepared to integrate them into teaching. Without institutional support, the burden of adaptation falls unevenly across departments and individuals, potentially widening gaps in educational quality.

At a policy level, the study challenges the assumption that technology itself drives transformation, highlighting instead the human factors that mediate impact. Learning effectiveness emerges not as a direct consequence of AI capability, but as the result of experience, perception, and engagement.

FIRST PUBLISHED IN: Devdiscourse