Healthcare at risk: Lack of AI explainability fuels over-reliance on machines
A group of researchers has found that the explainability of artificial intelligence systems could reshape liability rules in healthcare, influencing how doctors, manufacturers, and policymakers assign responsibility for medical decisions. The team examined how opaque and transparent AI systems create different incentives for medical professionals and industry stakeholders.
The study, published on arXiv as "Explainability matters: The effect of liability rules on the healthcare sector," uses legal analysis and game-theoretic modeling to compare scenarios in which AI systems operate as black boxes with scenarios in which they provide transparent, interpretable outputs.
How does explainability affect liability in healthcare?
The researchers focus on two archetypes: an “Oracle” AI, which offers diagnostic advice without explanation, and an “AI Colleague,” which provides clear, interpretable reasoning similar to a human peer.
In the Oracle scenario, liability often shifts toward AI manufacturers, as medical practitioners can argue that they simply followed the advice of a certified system. The lack of transparency limits the ability of doctors to challenge or interpret the AI’s decision, reducing their personal liability exposure but potentially undermining independent judgment.
The AI Colleague model, on the other hand, places greater responsibility on practitioners. When AI explains its reasoning, doctors are expected to evaluate, question, and integrate those recommendations into their clinical decision-making. The presence of an explanation means they cannot simply defer responsibility to the machine.
The study highlights that explainability changes not just legal outcomes but also behavioral incentives. Transparent systems create accountability loops where doctors remain central decision-makers, while opaque systems risk transferring liability and trust toward manufacturers and their algorithms.
What risks do opaque AI systems create in clinical practice?
A key concern identified is the rise of defensive medicine when opaque AI systems are used. Doctors facing liability uncertainty may feel pressured to follow machine recommendations blindly, regardless of their own clinical judgment, to protect themselves from potential lawsuits.
This behavior may shield practitioners from liability but can undermine patient safety. It reduces the role of human expertise and increases reliance on automated systems that may not fully account for contextual or individual patient factors.
The authors’ game-theoretic modeling supports this finding, showing that alignment with machine recommendations becomes the dominant strategy for practitioners when AI operates as a black box. In this scenario, liability distribution inadvertently encourages over-reliance on automation.
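To give a concrete sense of how such a result can arise, the sketch below sets up a toy two-state decision problem in Python. The regime names, payoff numbers, and the best_response helper are illustrative assumptions made here, not the paper's actual model; they simply show how liability costs that penalize deviating from a certified black box can make "follow the AI" the dominant choice, while explainability restores a genuine trade-off.

# Toy illustration (assumed numbers, not the paper's model): a clinician chooses
# to "follow" or "override" an AI recommendation under two liability regimes.
# Payoffs combine expected patient outcome with the clinician's liability exposure.

REGIMES = {
    "black_box": {
        # Overriding a certified oracle invites liability even when the AI errs,
        # while "I followed the system" shields the clinician, so follow dominates.
        "ai_correct":   {"follow": 1.0, "override": -0.5},
        "ai_incorrect": {"follow": 0.2, "override": 0.0},
    },
    "explainable": {
        # An inspectable error the clinician ignored shifts blame back to them,
        # so overriding pays off when the AI is likely wrong.
        "ai_correct":   {"follow": 1.0, "override": -0.5},
        "ai_incorrect": {"follow": -0.8, "override": 0.6},
    },
}

def best_response(regime: str, p_ai_correct: float) -> str:
    """Return the action that maximizes the clinician's expected payoff."""
    payoffs = REGIMES[regime]
    expected = {
        action: p_ai_correct * payoffs["ai_correct"][action]
        + (1.0 - p_ai_correct) * payoffs["ai_incorrect"][action]
        for action in ("follow", "override")
    }
    return max(expected, key=expected.get)

if __name__ == "__main__":
    for p in (0.9, 0.4):
        for regime in REGIMES:
            print(f"p(AI correct)={p}, {regime}: {best_response(regime, p)}")
    # p=0.9: both regimes -> follow
    # p=0.4: black_box -> follow (still dominant), explainable -> override

Under these assumed payoffs, "follow" dominates in the black-box regime no matter what the clinician believes about the AI's accuracy, whereas in the explainable regime the best action depends on the clinician's own assessment, which is the behavioral contrast the study describes.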
On the other hand, explainable AI systems provide a safeguard. By presenting interpretable reasoning, these tools empower practitioners to cross-check outputs against their expertise, reducing the likelihood of blind adherence. This mitigates defensive medicine and supports more balanced clinical judgment.
What policy steps are needed to align AI liability with patient safety?
The authors argue that healthcare regulators and policymakers should treat explainability as a legal and ethical requirement rather than a purely technical feature. Certification processes for medical AI should not focus solely on accuracy metrics but must also evaluate how well systems communicate reasoning to practitioners.
By embedding explainability into certification, liability frameworks can ensure that doctors remain accountable decision-makers, while manufacturers retain responsibility for the technical performance of their systems. This balance prevents the displacement of responsibility from one actor to another and promotes safer medical practice.
The paper also points out that liability rules should be designed to prevent strategic misuse. For example, if opaque AI systems are certified without clear explainability standards, manufacturers may gain undue protection from legal claims, while practitioners could be incentivized to abdicate judgment. Policymakers must therefore create frameworks that align liability with both technical performance and clinical oversight.
The broader implication is that explainability is not simply about fostering trust in AI. It directly influences how responsibility is allocated, how practitioners behave under legal risk, and how patients are protected in complex healthcare environments.
First published in: Devdiscourse

