AI can advance inclusive learning only with major policy and training reforms
Major structural, ethical, and pedagogical challenges still threaten to limit the impact of artificial intelligence in promoting inclusive education, according to a new global analysis of empirical evidence. The findings call for stronger teacher preparation, ethical safeguards, and theory-driven design frameworks to ensure that students with disabilities genuinely benefit from AI-supported learning.
The study, titled “Exploring Artificial Intelligence in Inclusive Education: A Systematic Review of Empirical Studies” and published in Applied Sciences, examines 16 peer-reviewed investigations conducted between 2020 and 2025. The authors synthesize findings from studies across multiple continents to assess whether artificial intelligence is improving learning outcomes, enhancing engagement, and strengthening inclusion for students with diverse educational needs. Their review also evaluates the obstacles educators face in adopting AI tools and how theoretical models guide, or fail to guide, AI integration in real classrooms.
While evidence points to steady improvements in personalized learning and student motivation, the authors warn that weak infrastructure, low teacher readiness, and inconsistent theoretical grounding may undermine long-term progress.
AI improves learning outcomes but gains are uneven across student groups
Across the reviewed studies, AI systems consistently supported personalization, improved accessibility, and increased learner engagement for students with disabilities and special needs. Intelligent tutoring systems emerged as one of the strongest contributors to academic improvement. In South Africa, the MathU tutoring platform offered differentiated learning pathways and real-time adaptive feedback, allowing students in inclusive mathematics classrooms to work at their own pace. Learners who often struggled to keep up in traditional settings demonstrated improved mastery, reduced frustration, and greater autonomy.
Other studies reported measurable improvements in comprehension, retention, and academic performance among students with intellectual disabilities, autism spectrum conditions, and learning disorders. AI-powered tools such as speech-to-text applications, translation features, automatic prompts, and multimodal content supported students who faced barriers in reading, writing, or verbal communication. In Saudi Arabia, a randomized controlled trial involving school-aged students with mild intellectual disabilities showed that those using personalized AI applications performed better in reading, writing, mathematics, and science compared to peers receiving standard instruction. These skills remained stable in follow-up assessments, underscoring the durability of AI-supported learning.
The authors note that emotional and cognitive engagement rose sharply when AI tools were embedded into classroom routines. Studies from Peru, India, and China showed that interactive features, adaptive pacing, and gamification created learning environments that kept students focused for longer periods. Tools such as chatbots, voice-based assistants, and gamified learning applications helped students sustain interest and participate more confidently.
Yet the positive effects were not universal. Some AI interventions prioritized diagnostic insights over sustained instructional scaffolding. Several tools helped identify learning gaps but lacked depth in follow-up content or differentiation. Research conducted among students with complex or multiple disabilities found limited improvement, largely because existing AI applications were not designed with high-needs learners in mind. In some cases, AI responses were overly simplistic or lacked nuance, limiting their usefulness in tasks requiring higher-order reasoning.
Even when engagement rose, not all learners found AI-mediated activities comfortable. Students in a study from the United Arab Emirates reported that AI-facilitated discussions encouraged deeper thinking but occasionally felt rigid or impersonal due to limited social cues and reduced spontaneity. Without careful design, AI risks reinforcing rather than removing barriers for students who rely heavily on relational cues and contextual support.
Despite these limitations, the overall evidence shows that AI is likely to play a key role in strengthening inclusive education if tools evolve from assessment-heavy systems into more holistic instructional partners. The most successful interventions combined adaptive technologies with strong pedagogical alignment and teacher facilitation.
Teacher readiness, infrastructure, and trust remain the biggest obstacles
The adoption of AI tools in inclusive education remains inconsistent and heavily dependent on teacher readiness, institutional backing, and access to resources. The review identifies infrastructure shortages as one of the most persistent and widespread barriers. Limited access to reliable digital devices, weak connectivity, outdated equipment, and unclear procurement policies appeared across studies conducted in Asia, Africa, South America, and the Middle East. These gaps do more than restrict adoption; they deepen equity divides by making AI tools readily accessible only in better-resourced schools and systems.
Low digital literacy among educators is another critical factor slowing AI integration. Many teachers reported limited confidence in selecting, evaluating, and managing AI tools. Without training in both technical and ethical aspects of AI, educators remain hesitant to rely on it for inclusive teaching. Studies from Jordan, Japan, and Nigeria showed that teachers struggled to interpret learning analytics dashboards, adapt AI outputs to student needs, or troubleshoot issues without technical support. This lack of preparedness weakened the potential benefits of AI tools, even when they were available.
Teacher concerns also extended to trust and perceived relevance. Some viewed AI as complex, opaque, or unreliable. Others feared that AI might dilute professional judgement or marginalize students who relied heavily on human interaction. In several cases, teachers expressed uncertainty about whether AI-generated feedback was accurate enough to support students with learning or behavioral challenges.
Institutional factors compounded these issues. Many schools lacked clear policy guidance on AI use, data protection, ethical boundaries, or curricular alignment. This led to fragmented adoption efforts, with some educators using AI informally while others avoided it entirely. Even when policies existed, they often failed to address the needs of students with disabilities, leaving teachers unsure of how to interpret guidelines for inclusive classrooms.
Despite these challenges, the review also identifies enablers that encourage adoption. When institutions provided structured training, reliable infrastructure, and consistent policy direction, teacher acceptance increased sharply. Educators responded positively to AI tools that were intuitive, accessible, and clearly aligned with pedagogical goals. Tools that offered clear benefits, such as reduced workload, improved student monitoring, or tailored instructional support, were more readily embraced. Collaborative environments, where teachers, parents, and administrators used AI-generated insights together, also strengthened adoption and improved inclusivity.
These findings highlight a common theme: AI adoption in inclusive education is less about technology and more about capacity-building, culture, and systemic readiness.
Theory-driven AI design shows stronger results but is used inconsistently
The review reveals a significant gap between AI’s technological capabilities and its pedagogical grounding. While many studies referenced theoretical frameworks, only a subset fully integrated them into AI design, implementation, and evaluation. When theories were used effectively, AI interventions became more inclusive, more engaging, and more responsive to learner diversity.
Self-Determination Theory (SDT) was one of the most impactful models. Studies applying SDT principles designed AI activities that supported students’ psychological needs for autonomy, competence, and relatedness. These interventions generated stronger motivation, reduced anxiety, and ensured that underrepresented or lower-achieving learners benefited equally from AI-supported instruction.
Other studies used the Technology Acceptance Model (TAM), its extended version TAM2, or the Unified Theory of Acceptance and Use of Technology (UTAUT) to examine adoption patterns. These models helped reveal how effort expectations, social influence, perceived usefulness, trust, and institutional support predicted whether teachers or students would use AI tools effectively. They also guided the design of interfaces that minimized cognitive load and frustration, making AI more accessible in inclusive settings.
The TPACK framework played a major role in studies evaluating AI integration with curriculum goals. Research on the MathU tutoring system demonstrated how blending technological, pedagogical, and content knowledge produces tools that genuinely support differentiated instruction. Sociocultural theory and epistemic network analysis also informed the design of conversational and collaborative AI applications, ensuring that learners could participate more equitably in critique, dialogue, and planning tasks.
However, many AI applications lacked any theoretical foundation, weakening their effectiveness. Without guiding principles, tools risked misalignment with learner needs, cultural context, or curriculum goals. The authors argue that theoretical models are essential for ethical, inclusive, and pedagogically sound AI design. They also support transparent evaluation, help educators anticipate challenges, and improve scalability across diverse educational systems.
First published in: Devdiscourse

