Beyond algorithms: Trust emerges as key barrier in healthcare AI adoption
A growing body of research suggests that the success of healthcare AI systems hinges not on technical performance alone but on a far more complex and contested factor: trust. A new interdisciplinary study challenges conventional assumptions about trust in healthcare AI, arguing that it cannot be engineered as a fixed attribute but must be understood as a dynamic and socially embedded process.
Published in AI & Society, the study titled “Between Map and Maze: Reframing Trust in Healthcare AI” maps how trust, trustworthiness, distrust, and mistrust are conceptualized across disciplines and highlights critical gaps in how these ideas are applied in real-world healthcare settings.
While trust is widely recognized as essential for the adoption of AI in healthcare, the concept itself remains fragmented, inconsistently defined, and often detached from clinical realities. Rather than a stable condition that can be designed into systems, trust emerges as a negotiation shaped by institutional structures, professional norms, and social relationships.
Fragmented views of trust shape AI adoption in healthcare
The study identifies five dominant ways in which trust is conceptualized in healthcare AI research, revealing a deeply fragmented intellectual landscape.
The most common approach frames trust as a set of designable principles embedded within AI systems. In this view, trustworthiness is treated as a measurable property achieved through attributes such as transparency, explainability, data quality, and validation. These principles are often presented as checklists or guidelines, suggesting that adherence to technical standards can produce trustworthy systems.
However, the study finds that meeting these criteria does not guarantee that healthcare professionals will actually trust the system. A model may be technically explainable yet fail to align with clinical reasoning or practical decision-making. This disconnect exposes the limits of principle-based approaches, which assume that trust can be reduced to system attributes rather than lived experience.
A second group of studies shifts the focus from systems to users, conceptualizing trust as a belief or attitude. Here, trust depends on whether healthcare professionals are willing to rely on AI outputs, often influenced by their confidence in the system’s capabilities. This perspective introduces the idea that trust is an active decision, shaped by individual perceptions and willingness to accept vulnerability.
Yet this approach also raises critical concerns. By emphasizing user responsibility, it risks placing the burden of trust on clinicians rather than on the institutions and developers responsible for AI design. It also overlooks how trust is formed in practice, where decisions are influenced by organizational culture, professional hierarchies, and situational pressures.
A third cluster presents binary distinctions, such as cognitive versus affective trust or trust versus reliability. These frameworks attempt to separate rational evaluation from emotional response or human judgment from machine performance. While analytically appealing, the study argues that such distinctions oversimplify the realities of healthcare, where decision-making is inherently complex and intertwined with social and institutional factors.
The fourth perspective draws on sociological and philosophical theories, framing trust as a structural mechanism that enables action under uncertainty. In healthcare, where decisions often carry high stakes and incomplete information, trust functions as a necessary condition for moving forward. This view acknowledges the relational nature of trust but often remains abstract, focusing on generalized models rather than concrete clinical contexts.
The fifth and most critical perspective situates trust within socio-technical systems, emphasizing its relational and context-dependent nature. Here, trust is seen as co-produced through interactions between clinicians, patients, institutions, and technologies. This approach highlights how trust is shaped by power dynamics, professional norms, and broader social structures.
Despite its depth, this relational perspective remains underrepresented in the literature. The study finds that the majority of research continues to focus on technical or individual dimensions of trust, leaving broader social dynamics insufficiently explored.
Distrust, mistrust, and overtrust remain underexplored
Beyond mapping conceptualizations of trust, the study highlights a major blind spot in current research: the limited attention given to trust disruptions, including distrust, mistrust, and inappropriate levels of trust.
Much of the existing literature treats distrust as a problem to be fixed, framing it as a barrier to AI adoption. However, the study argues that distrust can be analytically valuable, offering insights into the underlying tensions and uncertainties that shape interactions with AI systems.
The study flags overtrust as a particularly significant risk. In clinical settings, healthcare professionals may rely too heavily on AI outputs, assuming objectivity and accuracy without sufficient scrutiny. This phenomenon, often linked to automation bias, can lead to uncritical acceptance of recommendations, potentially compromising patient safety.
On the other hand, undertrust can result in the rejection of useful AI tools, limiting their potential benefits. Both extremes highlight the need for what researchers describe as appropriate or calibrated trust. Yet the study questions whether such calibration can be achieved through purely rational processes, given the complex social and emotional factors involved.
Mistrust introduces another layer of complexity. Unlike distrust, which is often framed as a lack of confidence, mistrust can reflect deeper issues such as perceived betrayal or systemic inequality. In some cases, mistrust is rooted in historical experiences and social contexts, particularly among marginalized groups who may view AI systems with suspicion.
The study also identifies skepticism as a contested concept. While some researchers see it as an obstacle to overcome, others argue that skepticism plays a critical role in preventing blind reliance on AI. This divergence reflects broader tensions in how trust and its disruptions are understood within the field.
The study contends that trust and distrust are not opposites but interconnected processes. Trust is not simply present or absent; it is continuously negotiated and reshaped through interactions and experiences. This perspective challenges linear models of AI adoption that assume trust can be built and maintained in a straightforward manner.
Rethinking trust as a dynamic and relational process
Current approaches often treat trust as a prerequisite for adoption, focusing on how to increase user confidence in AI systems. However, the research argues that this perspective overlooks the broader socio-technical context in which trust is formed and contested.
Healthcare environments are characterized by complex interactions between professionals, patients, institutions, and technologies. Trust in AI is shaped by these interactions, influenced by factors such as professional expertise, organizational culture, and regulatory frameworks.

The study suggests that attempts to engineer trust through technical features or guidelines are inherently limited. While principles such as transparency and explainability are important, they cannot capture the full range of factors that influence trust in practice.
Instead, trust should be understood as an ongoing negotiation, shaped by power relations, uncertainties, and institutional dependencies. This perspective shifts the focus from designing trustworthy systems to examining how trust is enacted and distributed within healthcare settings.
The research also highlights the importance of qualitative approaches, such as ethnographic studies and case analyses, in understanding trust dynamics. These methods can provide deeper insights into how trust is experienced and negotiated in real-world contexts, complementing technical and quantitative approaches.
For policymakers and developers, the findings call for more context-sensitive and inclusive approaches to AI governance. This includes engaging with diverse stakeholders, addressing structural inequalities, and recognizing the limitations of one-size-fits-all solutions.
The study further warns that dominant conceptualizations of trust can marginalize alternative perspectives, shaping how AI systems are designed and evaluated. When trust is defined primarily in technical terms, it may fail to align with the lived realities of healthcare professionals and patients.
First published in: Devdiscourse