Who is responsible when AI influences medical decisions?
With artificial intelligence (AI) systems gaining influence over clinical decisions, a key question is becoming harder to avoid: who is ethically responsible when AI shapes medical outcomes? New research published in the journal Healthcare suggests that while technical performance has advanced quickly, ethical governance has not kept pace, leaving healthcare systems exposed to responsibility gaps that could undermine trust, safety, and legitimacy.
The study, "Ethical Responsibility in Artificial Intelligence for Healthcare: A Systematic Review and Multilevel Governance Framework," finds that responsibility for AI-driven decisions is widely diffused across clinicians, institutions, developers, and regulators, often without clear mechanisms for accountability. The authors argue that without coordinated ethical governance across all levels of the healthcare system, even well-designed AI tools risk creating new forms of harm.
Ethical debate outpaces practical accountability
The study is based on a semi-systematic, theory-informed thematic review of 187 peer-reviewed publications published between 2020 and 2025. Using PRISMA 2020 guidelines, the authors analyze how ethical responsibility is discussed across the medical AI literature, identifying dominant themes and neglected areas. The findings reveal an uneven ethical landscape, heavily weighted toward abstract principles but light on enforceable responsibility structures.
Transparency and explainability emerge as the most frequently discussed ethical issues, accounting for a substantial share of the literature. Researchers emphasize the need for clinicians and patients to understand how AI systems generate recommendations, particularly in high-stakes settings such as oncology, cardiology, and emergency care. However, the study finds that transparency is often treated as an end in itself rather than as a means to accountability.
Other ethical dimensions receive significantly less attention. Patient autonomy, professional responsibility, and data privacy appear comparatively underrepresented, despite their central importance to clinical practice. This imbalance, the authors argue, creates blind spots in how ethical risks are anticipated and managed. When AI recommendations influence care pathways, responsibility for outcomes can become blurred, especially if clinicians rely on systems they did not design, train, or fully understand.
The review also highlights a recurring tension between innovation and regulation. Many studies acknowledge that regulatory frameworks lag behind technological development, but few offer concrete proposals for closing this gap. As a result, ethical responsibility is often framed as an individual clinician’s burden rather than as a shared institutional and societal obligation.
A multilevel model for ethical responsibility in medical AI
The researchers have developed a multilevel ethical responsibility framework designed to address the diffusion of accountability in medical AI. The model organizes responsibility across three interconnected levels: micro, meso, and macro.
At the micro level, responsibility lies with clinicians and healthcare professionals who interact directly with AI systems. This includes duties related to informed consent, appropriate use, and critical oversight of algorithmic recommendations. The study emphasizes that clinicians should not be reduced to passive executors of AI outputs, but should retain professional judgment and accountability for patient care.
The meso level encompasses healthcare institutions, hospitals, and organizations that procure, deploy, and manage AI systems. Here, ethical responsibility involves governance structures, training programs, validation processes, and internal audit mechanisms. Institutions are responsible for ensuring that AI systems are fit for purpose, aligned with clinical workflows, and monitored throughout their lifecycle.
At the macro level, responsibility extends to regulators, policymakers, professional bodies, and technology developers. This includes setting standards for safety, transparency, and accountability, as well as enforcing compliance through certification, reporting, and redress mechanisms. The study argues that without strong macro-level governance, ethical responsibility becomes fragmented and reactive rather than proactive.
Crucially, the framework distinguishes between ex ante and ex post responsibilities. Ex ante responsibilities focus on ethical design, risk assessment, validation, and stakeholder involvement before AI systems are deployed. Ex post responsibilities address accountability after deployment, including mechanisms for audit, explanation, liability, and patient redress when harm occurs. The authors stress that ethical governance must span the entire AI lifecycle, not just the moment of clinical use.
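For readers who think in structured terms, the shape of the framework can be sketched as a small data model: each level carries its own actors plus ex ante and ex post duties. The Python sketch below is purely illustrative and is not part of the study; the level names follow the article, while the class, field names, and duty wording are assumptions made for the example.

```python
from dataclasses import dataclass, field

@dataclass
class ResponsibilityLevel:
    """One level of the multilevel framework and the duties it carries.

    Illustrative model only; field names and duty wording are assumptions,
    not taken from the study.
    """
    name: str                                          # "micro", "meso", or "macro"
    actors: list[str]                                   # who carries responsibility at this level
    ex_ante: list[str] = field(default_factory=list)    # duties before deployment
    ex_post: list[str] = field(default_factory=list)    # duties after deployment

# Paraphrased from the article's description of the three levels.
framework = [
    ResponsibilityLevel(
        name="micro",
        actors=["clinicians", "healthcare professionals"],
        ex_ante=["informed consent", "appropriate use"],
        ex_post=["critical oversight of recommendations", "accountability for patient care"],
    ),
    ResponsibilityLevel(
        name="meso",
        actors=["hospitals", "healthcare institutions"],
        ex_ante=["procurement checks", "validation", "staff training"],
        ex_post=["internal audits", "lifecycle monitoring"],
    ),
    ResponsibilityLevel(
        name="macro",
        actors=["regulators", "policymakers", "professional bodies", "developers"],
        ex_ante=["safety and transparency standards", "certification"],
        ex_post=["compliance enforcement", "reporting", "patient redress"],
    ),
]

for level in framework:
    print(f"{level.name}: {', '.join(level.actors)}")
```

Separating ex ante from ex post duties in the sketch mirrors the authors' point that ethical governance must span the entire AI lifecycle, not just the moment of clinical use.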
Trust, governance, and the future of medical AI
The authors caution that technical safeguards alone cannot close these gaps: tools such as explainable models, bias mitigation techniques, and privacy-preserving methods are important, but they do not resolve the deeper governance challenges. Trustworthy AI, they argue, depends on clear responsibility allocation, institutional oversight, and regulatory coherence.
Responsibility gaps can erode trust even when AI systems perform well. Patients may accept AI-assisted care if they believe there are clear lines of accountability when things go wrong. Conversely, uncertainty about who is responsible can undermine confidence in both technology and healthcare institutions.
The study also highlights the evolving role of medical professionals. As AI systems become more integrated into diagnosis and treatment planning, clinicians face new ethical pressures. The research calls for strengthened AI literacy among healthcare professionals, enabling them to understand system limitations, challenge outputs, and communicate risks effectively to patients.
The policy implications are significant. The authors argue that fragmented ethical guidelines and voluntary codes are insufficient for systems that increasingly shape medical decisions. They call for mandatory algorithmic audits, standardized reporting requirements, and clearer liability frameworks that reflect the distributed nature of AI development and use.
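To make the idea of standardized reporting more concrete, the sketch below shows one hypothetical shape an algorithmic audit record could take. None of the field names or values come from the study or from any existing regulation; they are assumptions chosen only to illustrate the kind of information such a report might capture.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AlgorithmicAuditRecord:
    """Hypothetical minimal record for a post-deployment algorithmic audit.

    Field names are illustrative assumptions, not drawn from the study
    or from any existing regulatory standard.
    """
    system_name: str                # the audited AI system
    model_version: str              # exact version under review
    deployment_site: str            # hospital or department where it is used
    audit_date: date
    intended_use: str               # clinical task the system is approved for
    observed_issues: list[str]      # e.g. performance drift, bias findings
    responsible_party: str          # who is accountable for remediation
    patient_redress_path: str       # how affected patients can seek recourse

# Example entry with entirely fictional values.
record = AlgorithmicAuditRecord(
    system_name="ExampleTriageAssistant",
    model_version="2.3.1",
    deployment_site="Emergency department, Example Hospital",
    audit_date=date(2025, 6, 1),
    intended_use="Triage priority suggestion",
    observed_issues=["calibration drift on patients over 80"],
    responsible_party="Hospital AI governance board",
    patient_redress_path="Clinical incident reporting and patient ombudsman",
)
print(record.system_name, record.audit_date)
```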
First published in: Devdiscourse

