Students with disabilities gain less from AI support tools
A recent study has found that university students with disabilities engage with AI tools less frequently and perceive them as less useful than their peers without disabilities, despite recognizing their potential value. The findings highlight a growing accessibility gap at a time when AI is being promoted as a scalable solution to academic and emotional support needs. The research underscores the risk that poorly designed AI systems could deepen digital exclusion in learning environments already struggling to accommodate diverse student needs.
The study, "Artificial Intelligence and Emotional Support: A Comparative Study of University Students with and Without Disabilities," published in the journal Healthcare, is based on survey data from 358 university students in Spain.
AI becomes a new support layer in university life
Universities face growing student populations, budget constraints, and mounting mental health needs, even as expectations for personalized support continue to rise. In this context, AI tools have emerged as a readily available alternative for students seeking information, reassurance, or help navigating academic demands.
The research examines three key dimensions of student interaction with AI: familiarity with AI tools, frequency of use, and perceived usefulness. These dimensions are assessed across both informational support, such as help with coursework, problem-solving, or understanding complex material, and emotional support, including stress management, encouragement, and perceived companionship.
Among students without disabilities, the study finds relatively high levels of familiarity and frequent use of AI tools. These students tend to view AI as a helpful complement to their academic work and, to a lesser extent, as a source of emotional reassurance. Frequent users are more likely to perceive AI as useful, suggesting that familiarity reinforces confidence and trust in these systems.
Students with disabilities, however, report a different experience. While many acknowledge that AI could offer meaningful support, particularly by providing flexible and on-demand assistance, they are less familiar with AI tools and use them less often. This lower level of engagement is accompanied by lower perceived usefulness across both informational and emotional domains.
The study identifies this gap as more than a matter of personal preference. Instead, it points to structural barriers related to accessibility, usability, and trust. Students with disabilities may encounter interfaces that are not compatible with assistive technologies, conversational systems that fail to accommodate cognitive or sensory differences, or outputs that are difficult to interpret without additional support. These obstacles reduce the likelihood that AI becomes a reliable or appealing resource.
Importantly, the research does not suggest that students with disabilities reject AI outright. Rather, it shows that current AI implementations often fail to meet their needs, limiting the technology’s effectiveness as an inclusive support tool.
Unequal benefits and the limits of increased use
Across the full sample, students who use AI tools more often tend to rate them as more useful. This pattern holds for both informational and emotional support, reinforcing the idea that familiarity breeds acceptance.
However, the strength of this relationship differs sharply between students with and without disabilities. Among students without disabilities, increased use is strongly associated with higher perceived usefulness. For students with disabilities, the relationship is significantly weaker. Even when these students engage with AI more frequently, the perceived benefits do not increase at the same rate.
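To make this pattern concrete, a difference in slopes of this kind is commonly tested as a moderation effect, with an interaction term in a regression model. The sketch below illustrates that approach on simulated data; the variable names, coding, and model specification are illustrative assumptions, not the study's published analysis.

```python
# Illustrative sketch only: a moderation analysis of the kind the article
# describes, using hypothetical variable names and simulated survey data.
# The actual dataset, scales, and model specification are not shown here.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 358  # sample size reported in the study

# Simulated data: frequency of AI use (1-5 Likert), disability status (0/1),
# and perceived usefulness constructed so the use-to-usefulness slope is
# weaker for students with disabilities, mirroring the reported pattern.
disability = rng.integers(0, 2, n)
frequency = rng.integers(1, 6, n).astype(float)
usefulness = (
    1.0
    + 0.6 * frequency               # slope for students without disabilities
    - 0.4 * frequency * disability  # attenuated slope when disability == 1
    + rng.normal(0, 0.8, n)
)
df = pd.DataFrame(
    {"usefulness": usefulness, "frequency": frequency, "disability": disability}
)

# The formula expands to frequency + disability + frequency:disability.
# A significant negative interaction coefficient means extra use "buys"
# less perceived usefulness for students with disabilities.
model = smf.ols("usefulness ~ frequency * disability", data=df).fit()
print(model.summary().tables[1])
```

In a fit like this, the coefficient on the frequency:disability interaction directly quantifies how much weaker the use-to-usefulness relationship is for students with disabilities, which is the gap the study reports.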
This divergence suggests that simply encouraging greater use of AI is unlikely to close the gap. If underlying accessibility and design issues remain unaddressed, increased exposure may do little to improve outcomes for students with disabilities. In some cases, it could even exacerbate frustration or disengagement.
The study also highlights differences in how AI is perceived as an emotional support resource. Students without disabilities are more likely to view AI as capable of offering emotional reassurance, whether through conversational interaction, motivational feedback, or stress-related guidance. Students with disabilities, by contrast, are more cautious in assigning emotional value to AI systems, potentially reflecting concerns about reliability, appropriateness, or emotional nuance.
These findings raise broader questions about the role of AI in student wellbeing. While AI systems are often marketed as neutral or universally accessible tools, their design choices can privilege certain users over others. Emotional support, in particular, requires sensitivity to diverse communication styles, emotional cues, and individual support needs, all areas where many current AI systems remain limited.
The research also validates a questionnaire designed to measure AI familiarity, frequency of use, and perceived usefulness. The instrument demonstrates strong reliability across both student groups, providing a standardized method for assessing AI engagement in educational contexts. This methodological contribution supports future comparative studies and longitudinal research on AI adoption in higher education.
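The article reports strong reliability without reproducing the underlying statistics. For Likert-style instruments of this kind, internal consistency is conventionally summarized with Cronbach's alpha; the sketch below computes it on simulated responses. The item count, scoring range, and the 0.80 reading are assumptions for illustration, not the study's materials.

```python
# Minimal sketch of Cronbach's alpha, the usual internal-consistency
# statistic for questionnaires like the one the study validates.
# The item responses below are simulated; they are not the study's data.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, n_items) matrix of scale scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of total score
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Simulated correlated Likert responses for a hypothetical 6-item subscale:
# each respondent's answers share a latent trait, plus item-level noise.
rng = np.random.default_rng(0)
latent = rng.normal(0, 1, (358, 1))
items = np.clip(np.rint(3 + latent + rng.normal(0, 0.7, (358, 6))), 1, 5)

alpha = cronbach_alpha(items)
print(f"Cronbach's alpha: {alpha:.2f}")  # values near or above 0.80 are usually read as strong
```

Reporting an alpha per subscale and per student group, as a validation study of this type would, is what allows the instrument to be reused in the comparative and longitudinal research the article anticipates.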
Accessibility as the decisive factor for inclusive AI
The study’s conclusions emphasize that AI’s potential to support students depends less on technological sophistication than on inclusive design and institutional responsibility. Without deliberate attention to accessibility, AI risks becoming another layer of inequality in academic environments.
Students with disabilities often rely on assistive technologies, flexible formats, and clear interaction structures to engage effectively with digital tools. When AI systems are not designed with these requirements in mind, they create friction rather than support. The study suggests that this friction contributes to lower use and lower perceived usefulness among students with disabilities, despite their recognition of AI’s theoretical benefits.
The authors argue that universities and developers share responsibility for addressing these gaps. Institutions that promote AI tools without ensuring accessibility may inadvertently exclude the very students who could benefit most from additional support. Similarly, developers who prioritize speed, novelty, or general usability over inclusive design risk reinforcing systemic barriers.
The findings also challenge the assumption that AI can serve as a low-cost substitute for human support services. While AI may offer scalability, it cannot compensate for structural shortcomings in accessibility or institutional support. For students with disabilities, AI must be integrated into a broader ecosystem of inclusive education practices rather than deployed as a standalone solution.
From a policy perspective, the study points to the need for clear guidelines on accessible AI in higher education. This includes compatibility with assistive technologies, transparent communication about AI capabilities and limitations, and opportunities for student feedback in system design and evaluation. Without such measures, AI adoption may outpace universities’ ability to ensure equitable access.
First published in: Devdiscourse

