How AI is changing patient-doctor decision-making
New research suggests that AI's role in healthcare may be far more complicated than the promise of faster analysis and more personalized care implies. While artificial intelligence systems may strengthen care in some settings, they also risk weakening trust, autonomy, and communication if they are introduced without careful design and oversight.
The study, “AI and Shared Decision-Making: A Systematic Review,” published in AI & Society, examines whether AI can genuinely support shared decision-making in healthcare or whether it may create a new layer of technological control between patients and doctors. Based on a systematic review of 78 studies identified through PRISMA-based searches of major academic databases, the paper maps both the opportunities and the unresolved risks surrounding AI in one of medicine’s most sensitive domains: decisions about treatment, prognosis, and patient preference.
Shared decision-making, a core principle of patient-centered care, is built on the idea that medical choices should not be dictated solely by clinical evidence or physician judgment but should emerge from a process in which patients and clinicians exchange information, discuss options, and align care with the patient's own values and priorities. The review argues that AI is now entering this space rapidly, with growing attention across oncology, orthopedics, cardiology, and other clinical areas. However, it also warns that AI systems are not automatically compatible with the goals of shared decision-making simply because they can generate recommendations or summarize medical data.
AI is expanding its role in shared decision-making across healthcare
The paper identifies a broad set of roles for AI in shared decision-making, including helping explain treatment options, preparing patients for consultations, improving communication about risks and benefits, supporting chronic disease management, and assisting with patient education and self-monitoring. In these applications, AI is often presented as a digital companion to clinical discussion, helping people understand complex information that might otherwise be difficult to absorb during a short medical visit.
This promise is especially attractive in high-pressure care settings where clinicians face time constraints, documentation burdens, and growing complexity in diagnosis and treatment options. According to the review, AI may help by taking on certain non-clinical tasks such as summarization, transcription, or documentation, which could leave more time for human interaction between physician and patient. That is one of the most practical arguments in favor of AI-supported shared decision-making: not that the system replaces the human conversation, but that it creates room for a better one.
The review also highlights AI’s capacity to personalize information more deeply than traditional decision aids. Standard decision support materials often present general treatment comparisons or average risk estimates. AI tools, by contrast, can in some cases adapt recommendations and explanations to individual patient characteristics, medical history, and stated priorities. The authors note that some systems reviewed in the literature were designed to incorporate patient outcome priorities and values into the recommendation process, at least in principle. That ability to tailor information is one reason AI is being seen as a potentially transformative tool for shared decision-making rather than just an efficiency upgrade.
The paper suggests that this personalization could make clinical conversations more relevant and comprehensible. Patients facing difficult choices often need more than raw probabilities or standard treatment descriptions. They need support in relating treatment options to daily life, personal risk tolerance, family responsibilities, long-term goals, and emotional concerns. AI, the authors argue, may help make those discussions more specific and accessible when it is designed around the patient's real decision context rather than abstract clinical endpoints alone.
This is particularly important in specialties such as oncology and cardiology, where treatment decisions can involve tradeoffs between survival, side effects, quality of life, future function, and uncertainty. In such cases, shared decision-making is not just about giving patients more information. It is about helping them understand how different outcomes matter to them personally. The review argues that AI could support that goal, but only if it is built to work with patient values instead of treating those values as secondary to statistical optimization.
The biggest risks are opacity, bias, and a new form of digital paternalism
One of the key concerns is the use of black-box AI systems whose reasoning cannot be clearly understood by patients and may not be fully understood even by clinicians. If a treatment recommendation emerges from a system that offers little meaningful explanation, the review argues that shared decision-making may be undermined at its core. Patients cannot participate fully in a decision if neither they nor their physician can justify why one option is being promoted over another.
This concern goes beyond technical explainability in the narrow sense. The authors argue that healthcare AI must not only produce outputs that are understandable, but outputs that can be justified in light of the patient’s own circumstances and values. That is a higher bar. A model might explain that a recommendation was driven by risk scores, imaging patterns, or predicted survival curves, yet still fail to address whether that recommendation fits what the patient actually wants from care. In shared decision-making, justification matters as much as prediction.
The review also raises a major warning about what it describes as a potential “computer knows best” scenario. Many AI systems are designed primarily around clinical targets such as disease control, risk reduction, or survival prediction. Those goals may be important, but they do not automatically reflect what matters most to a given patient. A patient may prioritize independence over longevity, symptom relief over aggressive treatment, or family obligations over a statistically optimal intervention. If AI systems fail to incorporate these softer but decisive factors, they may reinforce a paternalistic model of care under the appearance of neutrality and precision.
That danger is further amplified by the authority often granted to algorithmic outputs. In healthcare, numerical or model-based recommendations can carry a strong aura of objectivity, even when they are shaped by incomplete datasets, biased assumptions, or limited value frameworks. The review notes that this can affect both clinicians and patients, potentially narrowing the conversation instead of enriching it. Rather than opening space for deliberation, AI could close it by making one option appear technically superior even when the patient’s personal priorities point elsewhere.
Bias, reliability, privacy, and accountability also emerge as major issues. The authors note that flawed or outdated information, biased training data, and weak governance structures can all compromise trust in AI-supported decisions. In shared decision-making, trust is not a peripheral concern. It is the basis on which patients disclose preferences, ask questions, and accept uncertainty. If AI systems appear unfair, unaccountable, or inscrutable, they may damage the relational foundation that shared decision-making depends on.
The review is equally clear that AI cannot supply empathy, emotional intelligence, or moral sensitivity in the way human clinicians can. Sensitive decisions about cancer treatment, end-of-life care, surgery, or chronic disease management often involve fear, grief, ambivalence, and shifting preferences. These are not problems that can be solved with recommendation engines alone. The authors therefore stress that patients and clinicians generally prefer AI to remain assistive, with physicians retaining final responsibility and adapting technology-supported guidance to each patient’s situation.
The future of AI in healthcare will depend on design, co-creation, and clinical responsibility
Many current AI systems were not built with shared decision-making as a primary design goal. Instead, they were often developed for prediction, triage, classification, or workflow optimization, then later discussed as if they could naturally fit into patient-centered care. The paper argues that this assumption is flawed. Supporting shared decision-making requires design choices that explicitly account for communication, patient values, trust, fairness, and the social realities of clinical encounters.
The authors call for stronger co-design involving patients, clinicians, and other stakeholders from the beginning of development. That means AI tools should not be introduced into shared decision-making as finished products built around technical convenience alone. They should be shaped by the people who will rely on them in emotionally difficult and ethically significant conversations. Without that process, the review suggests, AI systems may continue to reflect institutional priorities more than patient needs.
Training is another major issue. The paper emphasizes that healthcare professionals need better education on how AI tools work, where their limitations lie, and how to explain their role to patients. A clinician who cannot interpret or communicate the meaning of an AI-generated recommendation is unlikely to use it in a way that strengthens shared decision-making. In that sense, the challenge is not just technological literacy but communicative competence. AI-supported care will still depend on whether clinicians can translate outputs into discussions patients can trust and understand.
The review also implies that regulation and institutional governance will have to catch up quickly. If AI is to be integrated into shared decision-making responsibly, healthcare systems will need clearer rules on transparency, accountability, patient consent, privacy protection, and the division of responsibility between clinician and machine.
First published in: Devdiscourse

