Healthcare AI must be explainable to gain clinician confidence
Artificial intelligence has transformed healthcare decision-making, but clinicians remain skeptical of complex algorithms they cannot fully understand. A new study published in Algorithms argues that rule-based systems could provide the missing link between advanced AI models and clinical accountability.
The research, titled "Reclaiming XAI as an Innovation in Healthcare: Bridging Rule-Based Systems," examines how explainable artificial intelligence (XAI) can be integrated into healthcare by reviving and modernizing rule-based logic. The paper analyzes 654 publications indexed in Scopus between 2018 and May 2025, using scientometric mapping and PRISMA screening to trace trends, gaps, and future directions for explainable AI in medicine.
How explainable AI research has evolved in healthcare
The review documents a sharp increase in healthcare XAI research since 2018, reflecting the surge of machine learning adoption in diagnostics, clinical support, and digital health platforms. Thematic mapping reveals that interpretability, explainability, transparency, and trustworthiness are the central concerns driving this field. These priorities highlight a clear shift from algorithmic performance alone toward usability and clinician confidence.
The authors identify hybrid models, which combine deep learning with rule-based systems, as particularly significant. While black-box models such as deep neural networks deliver accuracy, they fall short on interpretability. Rule-based layers can reintroduce human-readable logic that physicians can scrutinize, validate, and trust. This blend helps address regulatory requirements and bridges the communication gap between AI systems and practitioners.
Scientometric analysis also pinpoints geographic and institutional leaders, with the United States, China, and European research centers dominating output. Collaborative networks are expanding, but the study notes an uneven spread, with limited contributions from low- and middle-income regions despite their pressing healthcare challenges.
What gaps threaten effective AI adoption
Despite progress, the review highlights persistent gaps that could hinder the real-world adoption of XAI. A key challenge is the lack of standardized frameworks for explainability. Without agreed-upon metrics or models, healthcare providers cannot consistently evaluate whether an AI system meets clinical or ethical standards.
The authors’ agenda-setting analysis identifies four categories of themes. In the high-impact, low-gap cluster are studies on interpretability and accountability, which are ready to scale. In the high-impact, high-gap cluster are responsible AI topics such as trust, usability, and ethics, where more research and practice are urgently needed. Low-impact, high-gap areas include digital health, mHealth, and telemonitoring, which remain fragmented and underdeveloped. Finally, low-impact, low-gap areas like hybrid AI-healthcare research are more mature and already producing incremental advances.
Another barrier lies in the opacity of model outputs. Even when AI tools integrate post-hoc interpretability techniques like SHAP or LRP, physicians often struggle to reconcile these outputs with established clinical reasoning. This creates friction in workflows and undermines clinician willingness to rely on AI recommendations.
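To make that friction concrete, the minimal sketch below shows roughly how a post-hoc tool such as SHAP is typically attached to an already-trained model; the dataset and classifier are generic placeholders, not systems examined in the review.

```python
# A minimal, illustrative sketch of post-hoc attribution with the shap library.
# The dataset and model are placeholders, not models discussed in the review.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# TreeExplainer assigns each feature a Shapley-value contribution to a prediction.
explainer = shap.TreeExplainer(model)
explanation = explainer(X.iloc[:5])

# Each attribution says how much a feature pushed a given prediction up or down --
# numbers a clinician still has to map back onto established clinical reasoning,
# which is exactly the workflow friction the review describes.
print(explanation.values.shape)
```

The output is a matrix of per-feature contribution scores rather than a clinical rationale, which is why the review argues that post-hoc attributions alone rarely satisfy practitioners.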
The review also stresses the ethical and social risks of unexplainable AI. In high-stakes fields such as oncology or cardiology, the absence of transparency could have direct consequences for patient safety and institutional liability. Furthermore, the digital divide in research contributions raises concerns that global health systems will not benefit equally from advances in explainable AI.
What rule-based systems offer for the future of healthcare AI
The study positions rule-based systems as a practical way forward for making AI accountable and clinically relevant. These systems operate on clear if-then logic, mirroring the structured reasoning physicians already apply in diagnosis and treatment. When embedded into hybrid AI models, rule-based components ensure that outputs are auditable and align with professional norms.
According to the authors, this approach does not mean a return to older, less accurate expert systems. Instead, rule-based logic can function as an explanatory layer within advanced architectures, helping clinicians trace how an algorithm reached a decision. This makes AI more trustworthy and reduces resistance to adoption.
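As a rough illustration of what such an explanatory layer might look like (the thresholds, feature names, and rule wording below are invented for illustration and are not clinical guidance from the paper), a thin rule-based wrapper can pair a black-box risk score with the if-then criteria that support it:

```python
# Illustrative sketch of a rule-based explanatory layer wrapped around a
# black-box risk model. Thresholds and wording are hypothetical examples.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Rule:
    condition: Callable[[dict], bool]   # if-part, evaluated on patient features
    explanation: str                    # then-part, human-readable and auditable


RULES: List[Rule] = [
    Rule(lambda p: p["systolic_bp"] >= 140, "Systolic blood pressure is in the hypertensive range."),
    Rule(lambda p: p["hba1c"] >= 6.5, "HbA1c meets the conventional diabetes threshold."),
    Rule(lambda p: p["age"] >= 65, "Age places the patient in a higher-risk group."),
]


def explain(patient: dict, risk_score: float) -> str:
    """Pair the black-box score with the rules that fired, so a clinician
    can audit which structured criteria support the recommendation."""
    fired = [r.explanation for r in RULES if r.condition(patient)]
    lines = [f"Model risk score: {risk_score:.2f}"]
    lines += fired or ["No predefined clinical rule was triggered; review manually."]
    return "\n".join(lines)


# Example: a hypothetical patient and a score produced elsewhere by a neural network.
print(explain({"systolic_bp": 152, "hba1c": 7.1, "age": 58}, risk_score=0.81))
```

The point of the design is traceability: the numeric score comes from the opaque model, but every statement attached to it corresponds to a rule a clinician or auditor can read, challenge, and update.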
The review calls for policy frameworks that encourage the integration of rule-based modules, ensuring that interpretability becomes a standard requirement in medical AI tools. Training programs for healthcare professionals are also emphasized, equipping them to understand and evaluate the logic behind AI-driven recommendations.
The authors recommend several future directions: longitudinal studies on trust and adoption, especially in clinical settings; greater focus on low- and middle-income countries where digital health needs are acute; and mixed-methods research to assess how explainability actually improves patient care outcomes. By combining empirical evidence with practical implementation strategies, these initiatives could help solidify explainable AI as a cornerstone of digital medicine.
FIRST PUBLISHED IN: Devdiscourse

