AI adoption in healthcare is slower than expected: Here's why
The real-world adoption of artificial intelligence in the healthcare sector remains uneven due to deep-rooted gaps between developers and frontline clinicians, according to new research published in Digital Health. The study highlights that while AI holds promise to ease workforce shortages, improve diagnostics, and streamline clinical workflows, its success depends less on technological capability and more on trust, collaboration, and shared understanding between key stakeholders.
Titled “Bridging perspectives: Success factors for AI implementation in healthcare from healthcare professionals and AI experts,” the study examines how differing priorities between healthcare professionals and AI experts are shaping the trajectory of AI adoption, revealing both critical alignments and persistent disconnects that could determine the future of digital health systems.
Trust, transparency, and collaboration emerge as decisive success factors
Transparency is the foundation of trust, and trust is the foundation of AI adoption in healthcare. Both healthcare professionals and AI experts identified transparency in how AI systems generate results as the single most critical factor influencing acceptance.
Clinicians consistently expressed the need to understand how AI arrives at its conclusions, particularly in high-stakes environments such as diagnosis and treatment planning. While they did not require full technical explanations, they emphasized the importance of interpretable outputs that allow them to validate recommendations. This demand reflects ongoing concerns about the “black box” nature of many AI models, especially neural networks, which can produce accurate results without offering clear reasoning.
AI experts acknowledged this concern but showed varying perspectives. Some emphasized the importance of explainability, while others argued that strong clinical evidence of performance should be sufficient to build confidence. This divergence highlights a fundamental tension between technical validation and user trust, with clinicians prioritizing interpretability over abstract performance metrics.
In addition to transparency, the study highlights the importance of early and continuous collaboration between stakeholders. Both groups agreed that involving healthcare professionals during the development phase significantly improves the relevance and usability of AI tools. Early engagement allows developers to better understand clinical workflows, identify real-world needs, and avoid creating solutions that fail to integrate into practice.
However, despite this shared recognition, practical collaboration remains uneven. AI experts often cited limited availability and engagement from clinicians, while healthcare professionals pointed to insufficient communication and a lack of meaningful involvement in development processes. These differences reveal that while collaboration is widely valued, it is not yet effectively operationalized.
Interorganizational cooperation also emerged as a critical enabler. Successful implementation depends on coordination between hospitals, developers, regulators, and other stakeholders. Where such cooperation is strong, AI adoption tends to be smoother and more sustainable. Where it is weak, systems face resistance, delays, or outright abandonment.
Gaps in understanding, training, and value perception hinder adoption
While areas of alignment exist, the study identifies several critical gaps that continue to hinder AI implementation. Chief among these is a lack of mutual understanding between healthcare professionals and AI experts.
AI developers often struggle to fully grasp the complexities of clinical environments, including workflow constraints, patient variability, and decision-making processes. At the same time, many clinicians lack awareness of AI capabilities and limitations, leading to unrealistic expectations or skepticism. This disconnect creates a feedback loop in which poorly aligned tools reinforce mistrust and limit adoption.
The study highlights the growing need for interdisciplinary expertise to bridge this divide. Professionals who can operate at the intersection of medicine and technology are increasingly seen as essential for translating clinical needs into technical solutions and vice versa. Without such roles, communication gaps are likely to persist, slowing progress and increasing implementation risks.
Training also emerged as a major challenge. Healthcare professionals emphasized the need for structured education on how AI tools work, what they can and cannot do, and how to integrate them into clinical practice. However, many reported insufficient support from developers, with training responsibilities often shifting to healthcare organizations.
AI experts, on the other hand, pointed to difficulties in delivering effective training, particularly given the time constraints and varying levels of technical literacy among clinicians. This mismatch highlights a broader issue in implementation strategy, where education is recognized as essential but remains underdeveloped in practice.
Another key area of divergence lies in the perception of value. While both groups agree that AI should deliver tangible benefits, their expectations differ significantly. AI experts tend to focus on automating routine tasks and improving efficiency, viewing these as primary drivers of value. Healthcare professionals, however, expect a broader impact, including decision support, improved patient outcomes, and clear clinical evidence of effectiveness.
This gap in expectations can lead to dissatisfaction, particularly when tools fail to meet clinical needs. Some clinicians expressed concern that developers prioritize economic or technical considerations over practical usability and patient care outcomes. At the same time, AI experts noted that clinicians may not always articulate their needs clearly, complicating the design process.
Usability further complicates this dynamic. Both groups agree that ease of use is critical, particularly in high-pressure healthcare environments. However, clinicians reported that usability is often overlooked during development, leading to tools that are technically advanced but difficult to integrate into daily workflows. Poor usability can result in abandonment, even when systems offer potential benefits.
Data, responsibility, and system integration define future trajectory
The study also sheds light on structural challenges related to data, responsibility, and system integration, all of which play a central role in shaping AI adoption.
Data quality and privacy emerged as significant concerns among healthcare professionals. Clinicians highlighted issues related to the use of external datasets for training AI models, raising questions about reliability, bias, and applicability to local populations. Concerns about patient data security and regulatory compliance further complicate data sharing, limiting access to the large datasets required for effective AI development.
AI experts, by contrast, generally downplayed these data challenges, suggesting a disconnect in how data-related risks are perceived by the two groups. This difference underscores the importance of aligning technical practices with clinical expectations, particularly in areas involving sensitive patient information.
Responsibility for decision-making represents another key tension. Both groups agreed that final clinical decisions should remain with healthcare professionals, reflecting regulatory frameworks and ethical considerations. However, differences emerged regarding the level of AI autonomy.
While AI experts emphasized clinician control, some healthcare professionals expressed openness to full automation in specific tasks, such as routine measurements in radiology. This suggests that acceptable levels of autonomy may vary depending on context, with greater automation possible in low-risk or repetitive tasks.
Integration into existing workflows also plays a decisive role. AI tools that require significant changes to established practices face resistance, while those that align with current systems are more likely to be adopted. The study highlights the importance of designing AI solutions that fit seamlessly into clinical routines, minimizing disruption and maximizing usability.
At an organizational level, readiness for change varies. Some healthcare professionals reported adaptability to new technologies, while others emphasized the need for clear benefits before altering workflows. This reinforces the importance of demonstrating value early in the implementation process.
Toward a collaborative framework for sustainable AI adoption
The study proposes a framework centered on early collaboration, shared understanding, and continuous adaptation. The framework emphasizes the need to involve healthcare professionals from the earliest stages of development, ensuring that AI tools are aligned with real-world needs and constraints.
Improving communication between stakeholders is identified as a key priority. This includes not only clearer explanations of AI capabilities but also mechanisms for ongoing feedback and iterative development. Interdisciplinary roles are highlighted as a critical component, helping to bridge the gap between technical and clinical domains.
The framework also underscores the importance of defining the scope of use for AI tools. Clear boundaries regarding what AI can and cannot do help manage expectations, clarify responsibilities, and reduce the risk of misuse. Combined with transparency in outputs, this clarity supports trust and encourages adoption.
Training is positioned as a central pillar of implementation. Providing healthcare professionals with the knowledge and skills to use AI effectively is essential for both safety and acceptance. At the same time, developers must gain a deeper understanding of clinical contexts to design tools that meet user needs.
Finally, the study calls for a stronger focus on demand-side value. AI systems must deliver measurable benefits in clinical practice, whether through time savings, improved outcomes, or enhanced decision-making. Demonstrating this value is critical for securing buy-in from healthcare professionals and ensuring long-term sustainability.
First published in: Devdiscourse

