How AI is reshaping daily healthcare experiences for disabled users

A new study published in Healthcare sheds light on both the promise and the pitfalls of AI-assisted services for people with disabilities (PwDs) in Saudi Arabia. The study, titled “Enhancing Healthcare for People with Disabilities Through Artificial Intelligence: Evidence from Saudi Arabia”, provides rare insight into user experiences with AI health technologies in a country undergoing rapid digital transformation under Vision 2030.

Based on interviews with nine individuals from Riyadh, Al-Jouf, and the Northern Border regions, the research outlines how AI tools are reshaping daily healthcare experiences for disabled users while also exposing major limitations in infrastructure, accessibility, personalization, and cultural fit.

Are AI tools truly accessible for people with disabilities?

Despite Saudi Arabia’s strides in digital health, accessibility remains uneven. Participants in the study used a diverse array of AI technologies, from smart wheelchairs and brain-computer interfaces (BCIs) to AI-powered navigation and smart home assistants, but all encountered obstacles rooted in geography, design, and device interoperability.

Geographic disparities emerged as a critical theme. While participants in Riyadh had better access to technical support and stable internet, those in rural areas faced weeks-long delays in maintenance and system failures due to poor connectivity. These shortcomings raise serious concerns for PwDs relying on real-time health data transmission for care continuity.
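
The article does not describe how such systems are built, but the connectivity concern maps to a well-known engineering pattern. Below is a minimal store-and-forward sketch in Python, assuming a hypothetical telemetry endpoint, showing how readings can be buffered locally so that a dropped rural connection delays delivery rather than silently losing health data:

```python
import json
import time
import urllib.request
from collections import deque

# Hypothetical endpoint for illustration; the study names no specific service.
ENDPOINT = "https://example.org/health-telemetry"

class StoreAndForward:
    """Buffer readings locally and flush when connectivity returns, so an
    intermittent link degrades to delayed delivery, not silent data loss."""

    def __init__(self) -> None:
        self.queue: deque = deque()

    def record(self, reading: dict) -> None:
        reading["captured_at"] = time.time()
        self.queue.append(reading)
        self.flush()

    def flush(self) -> None:
        while self.queue:
            payload = json.dumps(self.queue[0]).encode("utf-8")
            request = urllib.request.Request(
                ENDPOINT,
                data=payload,
                headers={"Content-Type": "application/json"},
            )
            try:
                urllib.request.urlopen(request, timeout=5)
            except OSError:
                return  # still offline: keep the reading and retry later
            self.queue.popleft()  # confirmed sent: safe to discard
```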

Compounding the problem is a lack of disability-specific design. For example, a participant with a speech impairment reported that their AI communication aid, ostensibly tailored to their needs, required repeated retraining to understand their voice. Tools for visually impaired users were often incompatible with screen readers, and some navigation interfaces translated poorly into Arabic, highlighting a critical linguistic gap in localized AI design.
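
The study reports the Arabic gap as a user experience, not an implementation detail. As a hedged illustration, the sketch below (all keys and strings invented) shows locale-aware prompting that surfaces a missing Arabic translation as a visible defect instead of silently falling back to English:

```python
# Invented prompt table for illustration only.
PROMPTS = {
    "ar": {"confirm_dose": "هل تريد تأكيد جرعة الدواء؟"},
    "en": {"confirm_dose": "Do you want to confirm the medication dose?"},
}

def get_prompt(key: str, locale: str) -> str:
    """Return a prompt in the user's locale, logging any missing
    translation so localization debt stays visible to maintainers."""
    table = PROMPTS.get(locale, {})
    if key in table:
        return table[key]
    # A silent English fallback is exactly the failure mode participants
    # described; warn loudly instead.
    print(f"warning: no '{locale}' string for '{key}'; falling back to English")
    return PROMPTS["en"][key]
```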

Perhaps most striking was the evidence that AI technologies favored physical disabilities over cognitive or sensory ones. While participants with physical impairments benefited from advanced tools such as AI-powered wheelchairs and BCIs, those with sensory or cognitive disabilities had access only to more basic solutions, such as voice-activated assistants with limited functionality. This uneven development exposes inequities in AI tool distribution that risk perpetuating existing disparities in healthcare access.

Do AI tools promote autonomy or reinforce dependence?

A key promise of AI in healthcare is to enhance patient autonomy, and the study reveals mixed success on this front. Some participants experienced profound independence through AI, especially those using BCIs to control assistive devices or smart systems to manage appointments and medication schedules.

But this independence is contextual. In Saudi Arabia’s family-centric culture, autonomy is not strictly individualistic. Many participants described a preference for technologies that supported their involvement in family-based decision-making, rather than replacing it. AI that respected these cultural dynamics, such as systems that facilitated shared healthcare discussions, was met with higher satisfaction.

Yet personalization failures often undermined these gains. One participant, for instance, highlighted how their smart home assistant could not distinguish between regular medication requests and emergency commands, posing serious safety risks. Another cited poor adaptation in AI rehabilitation programs when the system failed to adjust for fluctuating physical capabilities.
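
The report does not say how the assistant’s command handling was implemented. As a minimal sketch of the safeguard participants were effectively asking for, the hypothetical router below checks emergency phrases before any routine intent, so a call for help can never be shadowed by medication-request matching:

```python
# Illustrative phrases and handler names only; they are not drawn from the
# study or from any real assistant's API.
EMERGENCY_PHRASES = ("help me", "emergency", "can't breathe", "chest pain")

def route_command(utterance: str) -> str:
    """Route a transcribed voice command, evaluating emergency phrases
    before routine intents so escalation always takes priority."""
    text = utterance.lower()
    if any(phrase in text for phrase in EMERGENCY_PHRASES):
        return "escalate_to_emergency_contact"
    if "medication" in text or "dose" in text:
        return "log_medication_request"
    return "ask_for_clarification"

# Example: "I need my medication, this is an emergency" escalates rather
# than being logged as a routine dose request.
```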

Participants preferred AI that enhanced their participation in decisions, not AI that made decisions for them. When healthcare providers ignored patients in favor of caregivers, despite the use of AI communication tools, users felt marginalized rather than empowered. These insights challenge assumptions about passive AI support and call for co-designed tools that reinforce the agency of PwDs.

What cultural and structural barriers hinder AI adoption?

Besides technical shortcomings, cultural and psychological factors also shaped AI adoption. Acceptance was not automatic; it evolved through personal trust, family approval, and cultural alignment.

Participants described initial anxiety, particularly with invasive tools like BCIs. Trust only developed through education, peer testimonials, and transparency in how AI systems made decisions. One participant emphasized the importance of understanding the biomechanical logic behind AI-driven rehabilitation routines, suggesting that explainability is crucial for adoption, especially among users with prior marginalization in healthcare.

Cultural tailoring emerged as another linchpin of successful implementation. Voice assistants that used traditional Arabic phrases were more readily accepted in households, indicating that even subtle cultural signals can improve comfort and trust. However, deeper integration, such as alignment with prayer schedules or family consultation norms, was largely absent and represents a missed opportunity in current AI design.

Religious compatibility also played a tacit role. While not always explicitly discussed, several participants expressed unease with mind-reading technologies or constant surveillance, hinting at the need for AI systems to be developed in consultation with religious authorities and local ethics bodies.

From a structural standpoint, digital literacy gaps and financial constraints were major hurdles. Even in urban areas like Riyadh, some users lacked the training to properly calibrate AI tools, and the high cost of device maintenance deterred consistent usage. Importantly, the study warns that without sustained infrastructure support and affordable service models, the push toward AI in healthcare could worsen rather than reduce inequities for PwDs.

Policy gaps and ethical imperatives

While digital platforms such as Sehhaty and Mawid represent progress, they remain insufficiently tailored to PwDs. Screen reader incompatibilities, absence of sign language features, and generic user interfaces fail to meet diverse needs.

Ethically, the research raises multiple red flags. Family-centric healthcare challenges Western notions of individual privacy and informed consent. AI systems often neglect this cultural nuance, creating conflicts between users’ preferences and developers’ assumptions. Additionally, algorithmic bias risks excluding users with multiple or less common disabilities, underscoring the need for context-specific fairness metrics.

Recommendations from the study include:

  • Establishing national accessibility standards for AI health technologies;
  • Expanding digital literacy programs tailored to disability types;
  • Providing subsidized access and maintenance support in rural areas;
  • Integrating cultural and religious values into AI system design;
  • Involving PwDs in co-design processes to ensure relevance and inclusivity.