AI in medicine: Experience shapes trust and adoption among clinicians
The growing integration of artificial intelligence (AI) in medical decision-making is reshaping clinical workflows across the globe, but acceptance of these tools remains uneven among practitioners.
A new study highlights the complex interplay between clinical experience, risk perception, and trust in AI applications. Published in Frontiers in Digital Health under the title “Clinical Experience and Perception of Risk Affect the Acceptance and Trust of Using AI in Medicine,” the research provides rare insight into how attitudes toward AI differ across levels of seniority among gastroenterologists in the Asia-Pacific region.
How experience and risk perception shape AI acceptance
The study surveyed 319 gastroenterologists and gastrointestinal surgeons from countries including Australia, Singapore, China, India, Japan, and South Korea, all of whom were trained in colonoscopy for colorectal cancer screening. The researchers used a cross-sectional design grounded in established behavioral models such as the Technology Acceptance Model and the Theory of Planned Behavior to explore how experience and perceived risk interact to influence attitudes and behaviors toward AI in clinical practice.
Results revealed that experience was the strongest predictor of AI acceptance. Clinicians with more than 10 years of practice consistently demonstrated higher acceptance of AI-assisted tools than their less experienced colleagues. These senior practitioners tended to view AI as an assistive tool rather than a disruptive force, integrating it into workflows to enhance diagnostic precision and procedural efficiency.
Risk perception emerged as an important secondary factor. Senior clinicians with low risk perception showed the highest levels of acceptance, while those with heightened concerns about AI safety were more cautious in their adoption. On the other hand, junior clinicians exhibited the opposite pattern: they were more likely to embrace AI when they perceived higher levels of risk, suggesting a reliance on AI as a safety net in complex or high-stakes cases. When perceived risks were low, younger practitioners were less inclined to incorporate AI into their decision-making, possibly reflecting uncertainty or a lack of confidence in interpreting AI-generated insights without significant oversight.
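The paper’s analysis code is not reproduced here, but the crossover pattern described above is the kind of effect typically tested with a moderated regression. The following is a minimal illustrative sketch in Python: the variable names, coefficients, and data are all hypothetical assumptions chosen to mirror the reported pattern, not the study’s actual variables or results.

```python
# Hypothetical sketch: testing an experience x risk-perception interaction
# on AI acceptance with logistic regression. All names and numbers below
# are illustrative assumptions, not taken from the published study.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 319  # survey size reported in the article

# Simulated binary predictors: senior = more than 10 years of practice;
# high_risk = elevated perceived risk of using AI in clinical decisions.
senior = rng.integers(0, 2, n)
high_risk = rng.integers(0, 2, n)

# Simulated outcome built to echo the reported crossover: seniors accept
# AI more when perceived risk is low, juniors more when risk is high.
logit_p = -0.2 + 1.0 * senior + 0.6 * high_risk - 1.4 * senior * high_risk
accept = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

df = pd.DataFrame({"accept": accept, "senior": senior, "high_risk": high_risk})

# 'senior * high_risk' expands to both main effects plus their interaction;
# a significant negative interaction coefficient would indicate the
# crossover pattern the article describes.
model = smf.logit("accept ~ senior * high_risk", data=df).fit(disp=False)
print(model.summary())
```

In such a model, the interaction term, rather than either main effect alone, is what captures the finding that experience reverses the direction of risk perception’s influence.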
Attitudes, trust, and behavioral intentions
Beyond experience and risk perception, the study highlights the powerful role of attitudes in shaping behavior. Clinicians who viewed AI as valuable, reliable, and beneficial were more likely to integrate it into clinical workflows, regardless of their seniority. These findings underscore the importance of promoting positive attitudes toward AI technologies to drive broader adoption.
The researchers also observed that the complexity of AI applications did not significantly alter acceptance patterns. Whether AI was applied to basic polyp detection, advanced characterization, or complex therapeutic guidance, the key determinants of acceptance remained the clinician’s experience and attitude. This suggests that strategies to promote AI adoption should focus less on the technical complexity of applications and more on building user confidence and trust.
The interaction between experience and perceived risk adds a layer of nuance to these findings. Senior clinicians, confident in their expertise, are more likely to integrate AI tools when they see them as low-risk support systems, enhancing efficiency without compromising clinical judgment. For junior clinicians, limited experience may make them hesitant to fully trust AI outputs, especially in lower-risk scenarios, where the perceived value of AI may not justify the learning curve or the potential for workflow disruption.
Implications for training and implementation
The findings point to several actionable recommendations for healthcare organizations, policymakers, and developers aiming to accelerate AI adoption in clinical environments. First, training programs should be tailored to address the distinct needs of clinicians at different experience levels. For junior practitioners, education should focus on building confidence in interpreting AI outputs and understanding how these tools can enhance, rather than replace, their clinical decision-making capabilities.
Second, senior clinicians should be engaged as champions of AI integration. Their higher acceptance and trust can play a critical role in mentoring junior colleagues, fostering a culture of collaborative learning, and facilitating smoother implementation of AI systems across departments.
Third, clear communication about the benefits and limitations of AI is essential to bridge gaps in perception and build trust. Institutions should implement transparent validation processes, offer hands-on demonstrations, and create feedback mechanisms to continuously refine AI tools in ways that align with clinicians’ real-world needs.
The study also notes that while the effect sizes of experience and risk perception were modest, they were consistent across scenarios, indicating that these factors are deeply embedded in the clinical adoption process.
First published in: Devdiscourse

