Generative AI shown to strengthen student reasoning skills, especially problem-solving
CO-EDP, VisionRI | Updated: 09-12-2025 21:50 IST | Created: 09-12-2025 21:50 IST

Generative artificial intelligence (genAI) may be transforming the way students learn to analyze, evaluate and solve complex problems. A comprehensive meta-analysis published in the Journal of Intelligence shows that generative AI delivers a meaningful, measurable improvement in students’ higher-order thinking performance.

The study, titled Does Generative Artificial Intelligence Improve Students’ Higher-Order Thinking? A Meta-Analysis Based on 29 Experiments and Quasi-Experiments, evaluates how AI-driven learning tools influence students’ problem-solving, critical thinking and creativity. The research provides a consolidated assessment of the benefits, boundaries and conditions under which generative AI strengthens advanced cognitive learning.

Meta-analysis reveals consistent improvement across key cognitive domains

According to the study, generative AI produces a moderately strong positive effect on higher-order thinking overall. Across studies, AI-assisted learning interventions consistently outperformed traditional approaches in helping students evaluate information, construct deeper reasoning and generate multiple solutions. The strongest gains appear in problem-solving, an area where AI tools can model strategic thinking, provide adaptive feedback and help learners explore alternative pathways. Critical thinking also shows notable improvement, while creativity benefits to a slightly lesser extent but still demonstrates reliable positive trends.

The analysis reveals that generative AI works as a powerful scaffold, supporting learners as they tackle tasks that require planning, synthesis or abstraction. Tools that generate multiple representations, clarify misconceptions or offer progressive hints appear particularly effective in pushing learners toward deeper engagement. Students not only complete tasks more effectively but also show stronger cognitive persistence, exploring more reasoning steps and comparing alternatives more systematically.

However, the results show that not all interventions yield equal benefits. The researchers identify significant variation across studies, indicating that generative AI's impact depends on how thoughtfully it is integrated. When AI is used superficially, such as for simple content suggestions or single-step answers, its contribution to higher-order skills appears weak. By contrast, when AI tools require students to revise, justify, reflect or evaluate, the cognitive gains become more pronounced. This suggests that instructional design remains the critical driver of learning quality, with AI serving as a catalyst rather than a replacement for intentional pedagogy.

The meta-analysis asserts that generative AI’s role in cognitive development must be informed by how learners interact with the system. When used to support inquiry-based, open-ended or iterative learning processes, AI strengthens the habits and mental routines associated with higher-order thinking. The authors stress that these gains depend on careful task alignment, structured reflection opportunities and meaningful human guidance.

Duration and self-regulation identified as key drivers of AI’s effectiveness

Two moderating factors emerge as decisive in shaping the impact of AI on higher-order cognition: the duration of the AI-supported intervention and the learner’s level of self-regulated learning.

The authors find that the most substantial improvements occur when students engage with generative AI for a period of eight to sixteen weeks. Shorter interventions fail to give learners enough time to internalize patterns of strategic thinking or adapt to the feedback mechanisms embedded in AI tools. Extremely long interventions appear to reduce engagement, suggesting that sustained but well-structured exposure provides the optimal cognitive benefit.

Self-regulated learning plays an equally influential role. Students with strong self-regulation (those who plan their study, evaluate their progress and adapt their strategies) experience far greater gains from AI-supported instruction. Generative AI amplifies these strengths by offering iterative feedback, modeling alternative approaches and prompting deeper reflection. Conversely, students with weak self-regulation tend to rely passively on AI outputs, reducing opportunities to build genuine cognitive independence. This dependence can limit growth in higher-order skills and underscores the need for human-guided scaffolding, especially during the initial stages of AI adoption.

Although educational level and instructional method do not emerge as statistically significant moderators, descriptive patterns suggest that K–12 environments show slightly greater responsiveness to AI-supported learning, likely due to structured classroom guidance. Project-based and inquiry-oriented learning formats also appear more compatible with generative AI, offering students opportunities to apply reasoning, evaluation and long-term planning.

While AI can support cognitive development, it does not inherently foster discipline, persistence or metacognitive awareness. These traits must be cultivated through instructional design and learner support systems, ensuring that AI complements rather than replaces reflective practice.

Implications for educators and future directions for AI-supported learning

The meta-analysis offers clear guidance for educators, policymakers and developers contemplating the integration of generative AI into learning environments. Its findings reinforce that generative AI can meaningfully enhance higher-order thinking, but only when implemented deliberately.

First, instructional design must ensure that AI tools are used to promote active reasoning rather than passive content consumption. Tasks that require comparison, justification, revision or multi-step reasoning appear particularly effective. Educators should avoid activities where AI simply produces answers without requiring student interpretation, as these approaches have minimal cognitive benefit.

Second, the study highlights the importance of scaffolding self-regulated learning. Schools and universities implementing AI must provide explicit training in planning, monitoring and evaluating one’s work. As AI tools become more integrated into curriculum design, educators will need to cultivate stronger metacognition and reflective skills to prevent overreliance and ensure the development of genuine analytical capacity.

Third, intervention length must be managed strategically. Short-term pilots are unlikely to showcase AI’s full potential, while long-term unrestricted use may diminish student motivation or introduce dependency risks. Structured, time-bound programs, especially those lasting several months, appear to deliver the most consistent gains.

The authors acknowledge several limitations. Much of the existing research comes from a limited geographic and linguistic range, with many studies conducted in English and Chinese contexts. As generative AI technologies evolve rapidly, newer tools may produce different outcomes than those documented in earlier studies. The researchers call for broader samples, longitudinal designs and cross-cultural evaluations to strengthen the generalizability of findings.

They also advocate for more research into how AI tools can support creativity, noting that while creativity improved in the meta-analysis, the effect was smaller than for other higher-order skills. Understanding how AI might foster divergent thinking without constraining originality remains a key open question.

FIRST PUBLISHED IN: Devdiscourse