AI in healthcare hits trust barrier as clinicians call for explainability and shared liability
New research published in Applied Sciences explores how frontline professionals see the future of AI in hospitals. While enthusiasm is growing for AI’s ability to speed up workflows, improve diagnostic consistency, and support clinical decision-making, medical professionals remain firm that automated systems must serve strictly as augmentative tools, not as replacements for human judgment.
The findings come from the study “Augmenting, Not Replacing: Clinicians’ Perspectives on AI Adoption in Healthcare,” which surveyed 193 clinicians and medical physicists from imaging-related specialties to track how AI is being used, why adoption remains slow, and where doctors see the biggest opportunities and threats.
Clinicians expect faster, more accurate workflows, but still prioritize human oversight
Across the surveyed population, clinicians widely agree that AI will play an increasingly important role in diagnosis, imaging, and workflow management. Many respondents, particularly younger specialists with fewer than ten years of experience, believe that AI will help them perform tasks faster and with greater accuracy. Older clinicians also anticipate efficiency gains, though with slightly more caution.
Doctors identify several areas where AI offers the strongest value: improvements in diagnostic tools, assistance in forming accurate diagnoses, support in therapy personalization and treatment monitoring, and enhancements in telemedicine. These expectations align with broader trends in medical computing, where AI is helping to analyze images, flag abnormalities, sort health data, and reduce the burden of repetitive tasks.
However, the study finds that this optimism comes with firm limits. The central theme emerging from respondents is that AI should only augment physicians’ work. Its role is to offer automated second opinions, pre-analyze data, and support early detection, while human professionals retain responsibility for clinical judgment and final decisions.
Clinicians stress that AI cannot replace human expertise, clinical intuition, or the ethical responsibilities inherent to patient care. This belief holds across all specialties represented in the survey.
Persistent barriers: Lack of access, limited training, and low trust slow real adoption
Despite strong interest in AI’s potential, real-world adoption in Italian hospitals remains low. The study highlights a cluster of structural and practical obstacles affecting individual clinicians, organizational systems, and institutional frameworks.
Individually, many doctors report that they simply do not have access to AI tools in their workplaces. Even when tools are available, some clinicians feel they lack adequate training to use them effectively. Uncertainty around how AI recommendations are generated also undermines trust. Clinicians express concern about systems developed without sufficient medical input, as well as outputs that are difficult to interpret or explain.
Organizational barriers add another layer. Hospitals with strong research cultures or innovation-oriented leadership tend to adopt AI tools more effectively. By contrast, smaller or resource-constrained facilities, including many public hospitals, often struggle due to limited funding, conservative management, or shortages of skilled personnel capable of handling AI systems.
Institutional and legal barriers further complicate adoption. The study notes that many clinicians have limited awareness of major regulatory frameworks, including the EU AI Act. Legal uncertainties, especially around responsibility when AI errors occur, create hesitation among hospitals and practitioners alike.
These challenges show why AI’s technical capabilities alone are insufficient to guarantee widespread adoption. Culture, governance, and training infrastructure are equally important.
Explainability, accountability, and ethical boundaries shape doctors’ views of AI’s future
Doctors call for explainable AI, both to build trust and to ensure safe clinical application. Preferences differ by specialty: imaging professionals favor visual explanations that highlight the relevant regions of diagnostic images, while clinicians in other fields prefer textual or example-based explanations.
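The visual explanations imaging specialists describe are typically heatmaps overlaid on a scan. As a rough illustration of how such a map can be produced, the sketch below uses occlusion sensitivity, one common model-agnostic technique; it is not a method described in the study, and the `model_probability` function is a hypothetical stand-in for a trained classifier. A grey patch is slid across the image, and the drop in the model's confidence marks the regions the prediction depends on.

```python
import numpy as np

# Hypothetical stand-in for a trained image classifier; in practice this would
# be a real model returning the probability of the finding of interest.
def model_probability(image: np.ndarray) -> float:
    # Toy scoring rule so the sketch runs end to end: it "responds" to
    # bright pixels in the centre of the image.
    h, w = image.shape
    return float(image[h // 3: 2 * h // 3, w // 3: 2 * w // 3].mean())

def occlusion_saliency(image: np.ndarray, patch: int = 16) -> np.ndarray:
    """Slide a masking patch over the image and record how much the model's
    confidence drops; large drops mark regions the prediction relies on."""
    baseline = model_probability(image)
    heatmap = np.zeros_like(image, dtype=float)
    for y in range(0, image.shape[0], patch):
        for x in range(0, image.shape[1], patch):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = image.mean()  # mask this patch
            heatmap[y:y + patch, x:x + patch] = baseline - model_probability(occluded)
    return heatmap

# Usage: compute a saliency map for a synthetic 128x128 "scan".
scan = np.random.rand(128, 128)
saliency = occlusion_saliency(scan)
print(saliency.shape, float(saliency.max()))
```

In a clinical viewer, the resulting heatmap would be rendered as a colored overlay on the original image, which is the format the surveyed imaging professionals say they find most interpretable.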
When asked how they respond to AI outputs that contradict their own judgment, doctors do not blindly accept or reject the result. Instead, the majority investigate discrepancies to understand underlying causes. Some seek synergy between human and AI reasoning, while others revisit processes on both sides to determine the source of the inconsistency.
The question of legal responsibility also reveals mixed views. Many clinicians believe liability should be shared among developers, hospitals, and users, reflecting the complex ecosystem in which medical AI operates. Very few respondents think patients should hold responsibility when AI is used in their care.
Concerns about AI’s risks remain present. Clinicians highlight worries including lack of moral agency, potential damage to clinician–patient trust, and the possibility of reinforcing existing inequities through biased training data or poor system design.

