AI helps reduce learning gaps in foundational skills

CO-EDP, VisionRI | Updated: 31-01-2026 18:42 IST | Created: 31-01-2026 18:42 IST
Representative Image. Credit: ChatGPT

Artificial intelligence promises more equitable learning outcomes in diverse classrooms, but new research suggests that while AI-driven personalization can address some learning gaps, its impact depends heavily on how it is integrated into existing teaching practices.

These findings are detailed in the study "Artificial Intelligence and Learning Gaps: Evaluating the Effectiveness of Personalized Pathways," published in Applied Sciences, which analyzes the effectiveness of AI-supported instruction in secondary education.

Personalized pathways show measurable gains in foundational learning

The study focuses on a persistent structural problem in education: learning gaps that accumulate when students advance through curricula without mastering foundational knowledge. These gaps are particularly pronounced in under-resourced schools, where large class sizes and standardized instruction limit teachers’ ability to respond to individual needs. The researchers designed their study to test whether generative AI could function as a scalable support mechanism without replacing the teacher’s role.

The research followed 21 eighth-grade students over a 24-week period in a public school in Bogotá, Colombia. The students alternated between instructional cycles of standard whole-class teaching and cycles built around AI-supported Personalized Learning Pathways. These pathways were generated using large language models and tailored to each student's diagnosed learning gaps, prior performance, and engagement patterns. The study deliberately embedded AI within the existing classroom structure rather than isolating it as a separate intervention.
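
The paper does not publish its pathway-generation code, so the sketch below is only a minimal illustration, in Python, of how such a generator might assemble an LLM prompt from a student's diagnosed gaps and recent formative results. The class, function, and field names (StudentProfile, build_pathway_prompt, generate_pathway) are assumptions made for this example, not the study's actual implementation.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical student profile; field names are illustrative, not taken from the study.
@dataclass
class StudentProfile:
    student_id: str
    diagnosed_gaps: list[str]          # e.g. ["fraction operations", "order of operations"]
    recent_scores: dict[str, float]    # topic -> latest formative score (0 to 1)
    engagement_notes: str = ""

def build_pathway_prompt(profile: StudentProfile, unit_goal: str) -> str:
    """Assemble a prompt asking an LLM to draft a remedial pathway that revisits
    missing prerequisites before returning to the current unit goal."""
    gaps = "; ".join(profile.diagnosed_gaps)
    weakest = sorted(profile.recent_scores, key=profile.recent_scores.get)[:3]
    return (
        f"Current unit goal: {unit_goal}\n"
        f"Diagnosed prerequisite gaps: {gaps}\n"
        f"Weakest recent topics: {', '.join(weakest)}\n"
        f"Engagement notes: {profile.engagement_notes}\n"
        "Draft a short sequence of practice activities, easiest first, that revisits "
        "the missing prerequisites before returning to the unit goal. "
        "Keep explanations brief and include one worked example per activity."
    )

def generate_pathway(profile: StudentProfile, unit_goal: str,
                     llm: Callable[[str], str]) -> str:
    # `llm` stands in for whatever model endpoint a school system actually uses.
    return llm(build_pathway_prompt(profile, unit_goal))

if __name__ == "__main__":
    demo = StudentProfile(
        student_id="s01",
        diagnosed_gaps=["fraction operations", "order of operations"],
        recent_scores={"fractions": 0.4, "linear equations": 0.55, "graphing": 0.7},
        engagement_notes="responds well to short, worked examples",
    )
    # Dummy stand-in for a real model call, just to keep the sketch runnable.
    print(generate_pathway(demo, "solve one-variable linear equations", llm=lambda p: p))
```

In any real deployment of this kind, the generated pathway would still pass through the teacher for review before reaching the student, consistent with the hybrid model the study describes.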

Across three instructional cycles, the results were consistent. Students receiving AI-generated personalized pathways showed stronger improvement in lower-order cognitive skills, including basic knowledge acquisition, comprehension, and application of concepts. These are the foundational levels of learning that often determine whether students can keep pace with more advanced material later on.

The improvement was especially notable among students who had previously struggled under homogeneous instruction. AI-supported pathways helped these learners revisit missing prerequisites, practice concepts at an appropriate pace, and receive explanations adapted to their immediate needs. In effect, the personalized pathways acted as a form of academic remediation embedded within ongoing instruction, rather than a separate remedial program.

The study also highlights the practical feasibility of this approach. Teachers did not need to redesign curricula or abandon standardized lesson plans. Instead, AI-generated pathways supplemented classroom instruction, allowing students to work through targeted activities while teachers maintained oversight and instructional coherence. This hybrid model is presented as a key reason the intervention succeeded at the foundational level.

The findings suggest that generative AI’s greatest strength lies not in accelerating high-performing students, but in stabilizing those at risk of falling behind. By systematically addressing gaps in basic understanding, personalized pathways reduced performance divergence within the classroom, supporting a more equitable learning environment.

Higher-order thinking remains resistant to automation

While the gains in foundational learning were clear, the study draws a firm boundary around what AI personalization can achieve on its own. When students moved beyond basic comprehension into higher-order cognitive tasks, the results became far more uneven. Skills such as analysis, synthesis, evaluation, and transfer to new contexts showed limited and inconsistent improvement under AI-supported pathways.

In some cases, students demonstrated incremental progress, but these gains were neither uniform nor sustained across the cohort. As task complexity increased, many learners continued to struggle despite personalized AI support. The researchers interpret this pattern as evidence that higher-order thinking depends on more than individualized content delivery.

Advanced cognitive skills require structured dialogue, guided reasoning, feedback loops, and opportunities for collaborative problem-solving. These elements remain difficult to replicate through automated systems alone. While AI-generated explanations can clarify concepts, they cannot fully replace the cognitive scaffolding provided by skilled teachers and peer interaction.

The study emphasizes that higher-order learning is cumulative. Students who entered the intervention with weaker foundational skills found it particularly difficult to progress into advanced reasoning tasks, even after gaps were partially addressed. This reinforces the idea that AI-supported personalization is most effective as a stabilizing mechanism rather than a shortcut to complex thinking.

Teacher mediation emerged as a decisive factor in cases where higher-order gains did occur. When educators actively engaged with students, prompted reflection, and contextualized AI-generated materials within broader learning goals, students were better able to apply and extend their knowledge. Without this human guidance, AI personalization risked becoming repetitive rather than transformative.

These findings challenge narratives that present generative AI as a substitute for instructional expertise. Instead, the research positions AI as a support tool whose effectiveness is tightly coupled with pedagogical design and human oversight. Automation alone does not produce deep learning, particularly in cognitively demanding domains.

Implications for equity, learning design, and AI policy in education

The research setting, a low-income public school, is central to the study's significance. Learning gaps are most damaging in such contexts, where students have fewer external resources to compensate for instructional shortcomings.

By demonstrating that AI-generated personalized pathways can reduce foundational learning gaps without extensive infrastructure changes, the study suggests a potentially scalable approach for resource-constrained systems. However, it also warns against overstating AI’s capacity to solve structural educational inequalities on its own.

The authors examine the role of learning styles as part of personalization, acknowledging their influence on student engagement while cautioning against rigid classification. Learning preferences were used pragmatically to vary explanations and activities, but the study avoids treating them as fixed or determinative traits. Instead, continuous formative assessment guided adjustments to each student’s pathway.

This approach reflects a broader theme in the research: personalization must remain dynamic and evidence-based. Overfitting instruction to static profiles risks reinforcing limitations rather than expanding capabilities. AI systems, the study argues, should adapt to learner progress rather than confining students within predefined categories.
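
To make that distinction concrete, the short Python sketch below shows a mastery-based update rule in which the next activity is chosen from the latest formative result rather than from a fixed learner-type profile. The thresholds and function name are illustrative assumptions, not parameters reported in the study.

```python
# Illustrative mastery-based adjustment: the pathway responds to measured progress,
# not to a static learner label. Thresholds are arbitrary placeholders.
MASTERY_THRESHOLD = 0.8
REMEDIATION_THRESHOLD = 0.5

def next_step(topic: str, formative_score: float, prerequisite: str | None) -> str:
    """Choose the next activity for a topic from the latest formative result."""
    if formative_score >= MASTERY_THRESHOLD:
        return f"advance: extension task on {topic}"
    if formative_score < REMEDIATION_THRESHOLD and prerequisite:
        return f"remediate: revisit prerequisite '{prerequisite}' before retrying {topic}"
    return f"practice: additional guided exercises on {topic}"

# The same student gets different next steps as evidence accumulates, rather than
# being locked into a pathway chosen from an initial profile.
print(next_step("linear equations", 0.45, "fraction operations"))
print(next_step("linear equations", 0.72, "fraction operations"))
print(next_step("linear equations", 0.86, "fraction operations"))
```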

Teacher readiness and AI literacy also emerge as critical factors. While educators expressed openness to AI-supported personalization, effective use depended on understanding both the capabilities and limits of the technology. Teachers needed to interpret AI outputs, align them with curricular goals, and intervene when students reached cognitive plateaus. Without this professional judgment, personalization risked becoming mechanistic.

The study also raises important questions about overreliance. Students who depended heavily on AI-generated explanations showed less initiative in tackling unfamiliar problems. This finding underscores the need for deliberate instructional design that encourages autonomy, metacognition, and critical engagement, rather than passive consumption of personalized content.

From a policy perspective, the research reinforces the case for hybrid human–AI models in education. Investment in generative AI tools should be accompanied by teacher training, curriculum integration strategies, and clear pedagogical frameworks. AI adoption without these supports may improve surface-level outcomes while leaving deeper learning untouched.

First published in: Devdiscourse