AI acceptance in healthcare isn’t just about function - it’s about identity and empathy

CO-EDP, VisionRI | Updated: 22-04-2025 18:02 IST | Created: 22-04-2025 18:02 IST

A new study reveals the complex psychological and demographic factors shaping public willingness to adopt artificial intelligence-enabled voice assistants (VAs) and digital AI humans (DHs) in healthcare. Published in AI & Society under the title "Exploring Acceptability of AI‑Enabled Voice Assistants and Digital AI Humans in Healthcare: A Cross‑Sectional Survey," the University of Westminster-led research surveyed 472 UK adults and offers a detailed picture of who embraces AI in healthcare and why.

At a time when AI technologies are poised to reshape patient-provider interactions, the study identifies both catalysts and barriers to acceptance. Institutional trust, cultural identity, and personality traits emerged as pivotal predictors of user attitudes. While a majority of respondents were familiar with VAs and moderately aware of DHs, actual day-to-day usage was low - indicating a gap between familiarity and comfort.

What drives acceptance of voice assistants in healthcare?

The research highlights a strong public interest in AI-based healthcare tools, with over 85% of respondents having used a voice assistant and more than 82% currently owning one. However, only 25.8% reported daily use. Crucially, the study found that willingness to use VAs for healthcare was highly dependent on NHS endorsement, which increased adoption odds sixfold.
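To make the sixfold figure concrete, the short calculation below shows how an odds ratio of about 6 shifts a hypothetical baseline; the 20% baseline willingness rate is an assumption for illustration, not a number from the study.

```python
# Illustrative arithmetic only: the 20% baseline is assumed, not reported.
baseline_p = 0.20                     # hypothetical willingness without NHS endorsement
baseline_odds = baseline_p / (1 - baseline_p)
endorsed_odds = 6.0 * baseline_odds   # a "sixfold" increase applies to the odds
endorsed_p = endorsed_odds / (1 + endorsed_odds)
print(f"odds {baseline_odds:.2f} -> {endorsed_odds:.2f}; "
      f"probability {baseline_p:.0%} -> {endorsed_p:.0%}")
```

Note that a sixfold increase in odds is not a sixfold increase in probability: under this assumed baseline, willingness rises from 20% to 60%.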

Demographic variables played a notable role. Women, ethnic minorities, and individuals with lower education levels were significantly less likely to use VAs. Digital habits also mattered: participants who seldom searched for health information online were far less inclined to accept VAs, whereas those engaged in online health discussions showed much greater openness. Perceptions of usefulness and security mattered too: participants were more likely to embrace VAs when they deemed them effective, easy to use, and safe.

Psychological disposition emerged as a potent influence. Among the Big Five personality traits, only openness remained a statistically significant predictor of VA acceptance in adjusted models. Those scoring high in openness were more than 75% more likely to embrace voice assistants in a healthcare setting.
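Findings of this kind typically come from adjusted logistic regression. The sketch below, run on synthetic data with illustrative variable names (not the study's dataset), shows how such a model recovers an adjusted odds ratio of roughly 1.75 for openness - the "75% more likely" figure.

```python
# A minimal sketch on simulated data; variable names are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 472  # matches the study's sample size, but the responses here are simulated
df = pd.DataFrame({
    "openness": rng.normal(0, 1, n),   # standardised Big Five trait score
    "female": rng.integers(0, 2, n),   # illustrative demographic covariates
    "degree": rng.integers(0, 2, n),
})
# Simulate willingness with a true openness effect of log(1.75) on the log-odds
log_odds = -0.5 + np.log(1.75) * df["openness"] - 0.4 * df["female"]
df["accept_va"] = rng.binomial(1, 1 / (1 + np.exp(-log_odds)))

model = smf.logit("accept_va ~ openness + female + degree", data=df).fit(disp=False)
print(np.exp(model.params))  # exponentiated coefficients are adjusted odds ratios
```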

Who is likely to accept digital AI humans and why?

Digital AI humans, despite being less commonly encountered than voice assistants, were viewed favorably overall. Awareness stood at 70.3%, with a median acceptance score of 2.17 on a five-point scale where lower scores signified greater favorability. Yet unlike VAs, the strongest predictors of DH acceptance were not just prior exposure or perceived utility - they were deeply tied to cultural identification and personality dynamics.
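Because the scale is reverse-scored, comparisons with conventionally scored instruments require recoding. A minimal sketch with made-up responses:

```python
import numpy as np

# Hypothetical 5-point Likert responses where 1 = most favorable, as in the study
raw = np.array([1, 2, 2, 2, 3, 4])
recoded = 6 - raw                            # reverse-code: new = (max + 1) - old
print(np.median(raw), np.median(recoded))    # 2.0 raw; 4.0 once recoded
```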

The most prominent factor shaping DH attitudes was the perceived importance of face-to-face interaction in healthcare. Participants who valued in-person consultations expressed more favorable views toward DHs, likely because of their human-like interfaces. Engagement in online health discussions also boosted acceptance, suggesting that individuals already comfortable with digital health information exchange are more receptive to DHs.

From a psychological standpoint, conscientiousness and low neuroticism were the strongest predictors of positive DH attitudes. In contrast, openness, which was decisive for VA acceptance, was not statistically significant for DHs. This suggests DHs may resonate more with individuals who prefer structure and emotional stability over novelty and flexibility.

Ethnicity was another key determinant. White/Irish/European participants were significantly more accepting of DHs than ethnic minorities. This finding reinforces earlier studies showing that avatar representation, particularly matching users’ ethnic backgrounds, enhances trust and acceptance. When DHs appear culturally aligned, users, especially those from underserved communities, are more likely to engage meaningfully.

How can these technologies bridge or widen healthcare gaps?

The findings underscore the dual potential of AI-enabled conversational agents to both bridge and widen healthcare inequities. While AI technologies like VAs and DHs promise scalable, low-cost, and personalized healthcare delivery, their adoption is not universal. Demographic and psychological differences reveal critical fractures in public readiness.

The study affirms the Technology Acceptance Model’s core tenets: perceived usefulness and ease of use are paramount for adoption. However, it also expands on this framework by integrating personality dimensions and cultural representation. Openness to innovation may drive VA adoption, but conscientiousness and a preference for human interaction are what win over DH users.
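Under the assumption that the underlying analysis is logistic regression, one standard way to check whether personality genuinely extends the Technology Acceptance Model is a likelihood-ratio test between nested models. The sketch below uses synthetic data and illustrative names, not the study's variables.

```python
# Nested-model comparison on simulated data; all names and effect sizes are assumed.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(1)
n = 472
df = pd.DataFrame({
    "usefulness": rng.normal(0, 1, n),  # core TAM predictor (simulated)
    "ease": rng.normal(0, 1, n),        # core TAM predictor (simulated)
    "openness": rng.normal(0, 1, n),    # candidate personality extension
})
p = 1 / (1 + np.exp(-(0.8 * df["usefulness"] + 0.5 * df["ease"] + 0.4 * df["openness"])))
df["accept"] = rng.binomial(1, p)

base = smf.logit("accept ~ usefulness + ease", data=df).fit(disp=False)
full = smf.logit("accept ~ usefulness + ease + openness", data=df).fit(disp=False)
lr_stat = 2 * (full.llf - base.llf)        # likelihood-ratio statistic
print(stats.chi2.sf(lr_stat, df=1))        # small p-value: the trait adds fit
```

A significant result indicates the trait improves model fit beyond perceived usefulness and ease of use, which is the operational sense in which a study can be said to expand the TAM framework.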

Designing these technologies to be culturally responsive could be a game-changer. Developers are encouraged to consider ethnicity-based customization, voice identity options, and culturally adaptive features. These inclusions could significantly improve engagement across groups that historically show lower acceptance, namely women, ethnic minorities, and individuals with limited education or digital exposure.

Policy implications are equally clear. Institutional endorsement, particularly from public health bodies like the NHS, significantly boosts trust and intent to use AI in healthcare. By officially endorsing or co-developing AI agents, healthcare systems can instill the confidence necessary to reach more hesitant users.

The study’s limitations include its reliance on proxy measures, such as self-reported willingness to use rather than observed real-world adoption, and the predominance of White/European respondents, which limits the generalizability of ethnic subgroup findings. Still, the work is among the first to link the Big Five personality model with AI acceptance in healthcare, charting a new frontier for future research.

First published in: Devdiscourse