AI can diagnose your illness, but don’t trust it like a doctor
A new study warns that the rising influence of artificial intelligence may come at the cost of one of medicine’s most essential values: trust. In a peer-reviewed analysis published in the Journal of Medical Ethics, ethicist Dr. Joshua Hatherley argues that while medical AI can be relied upon for its diagnostic precision, it cannot, and should not, be considered trustworthy in the same sense as a human clinician.
The study, titled "Limits of Trust in Medical AI," examines the philosophical and ethical boundaries of trusting artificial intelligence in clinical practice. While AI systems, particularly those based on deep learning, are increasingly accurate in tasks like disease diagnosis, hospital readmission prediction, and drug discovery, Hatherley contends that they lack the moral agency and interpersonal grounding necessary to fulfill the kind of trust traditionally embedded in the doctor-patient relationship.
“Reliability is not the same as trustworthiness,” the study asserts. “AI systems may be accurate, consistent, and even indispensable, but they do not possess goodwill, empathy, or moral responsibility - the essential foundations of trust.”
This distinction, according to Hatherley, is more than academic. As AI systems grow more capable - sometimes even surpassing human clinicians in diagnostic performance - there is mounting pressure to defer to these technologies in key medical decisions. If patients begin to rely more on machine recommendations than on their doctors’ judgment, trust may shift from humans to machines. But that shift, the study warns, risks replacing deep, interpersonal trust with a shallow form of mere reliance.
Drawing on philosophical theories of trust from scholars such as Russell Hardin and Annette Baier, the study outlines why AI systems, as non-agential entities, cannot be subjects of trust. Trust involves more than expecting competence - it requires beliefs about the motivations and moral obligations of the trusted party. In medicine, trust emerges when patients believe their doctor not only knows what to do but wants to act in their best interests.
In contrast, Hatherley argues, AI systems have no motivations, no capacity for goodwill, and cannot be held morally responsible for their decisions. “To trust is to believe that someone is moved by the fact that you are counting on them,” he writes. “AI, lacking consciousness and agency, cannot reciprocate that trust.”
The paper uses real-world examples and thought experiments to illustrate this difference. A patient might feel confident in an AI’s accuracy when interpreting imaging scans but would not expect it to understand emotional distress or make decisions based on personal values. Similarly, if an AI system errs in diagnosis, the blame would not fall on the machine, but on its developers, deployers, or overseeing physicians - highlighting that moral responsibility remains human, even if decision-making becomes automated.
This becomes even more complex when AI systems move beyond serving as decision support tools and begin to displace the epistemic authority of clinicians themselves. If machines consistently outperform doctors in accuracy, clinicians may be obligated to defer to AI outputs to reduce errors. But as Hatherley notes, such deference could undermine the human foundation of clinical practice.
Some experts have argued that AI will not replace doctors but will instead enhance them - an “extensionist” view. Others, known as “substitutionists,” believe AI could eventually take over many of the roles traditionally held by physicians. Hatherley acknowledges the likelihood of role displacement but focuses his critique on what is lost in the process: the moral and emotional components of care that form the basis for healing relationships.
The study also critiques ongoing efforts to promote “trustworthy AI” in healthcare. Organizations including the European Commission’s High-Level Expert Group on Artificial Intelligence and tech firms like IBM have released ethical guidelines aimed at making AI systems more “trustworthy.” But according to Hatherley, this language reflects a category error: “Trustworthy AI” implies a capacity for trust that machines fundamentally lack. Instead, he recommends that policy and development efforts reframe this goal as “reliable AI,” reserving the concept of trust for relationships between human agents.
The implications of this distinction are far-reaching. If patients are encouraged to trust AI systems as they would human doctors, they may expect empathy or moral engagement that these systems cannot deliver. Worse, they may feel alienated in clinical encounters where machines dominate decision-making, reducing their sense of being seen, heard, and cared for.
As AI continues to integrate into medicine, the author calls for deeper reflection on how to preserve the human dimensions of care. He argues that rather than allowing AI to erode the centrality of trust in medicine, healthcare systems should be designed to enhance trust in clinicians, even as they integrate increasingly sophisticated digital tools.
This includes ensuring that clinicians retain meaningful roles in decision-making, that AI recommendations are transparent and explainable, and that patient relationships remain grounded in empathy and accountability. Above all, he urges healthcare leaders and developers not to mistake reliability for trustworthiness - because in medicine, what patients often need most is not just answers, but assurance.
First published in: Devdiscourse

