Artificial Intelligence in Health Care Needs Governance, Not Hype, to Truly Deliver Benefits

AI is rapidly entering health systems, but this book warns that adoption is outpacing evidence, regulation and workforce readiness, creating risks to safety, trust and equity. It argues that AI must remain a tightly governed tool, deployed under human oversight and clear rules, so that technology serves public health rather than undermining it.


CoE-EDP, VisionRI | Updated: 09-02-2026 09:54 IST | Created: 09-02-2026 09:54 IST

Artificial intelligence is no longer an abstract future technology in health care. It is already helping analyse scans, manage hospital workflows, forecast disease outbreaks and draft clinical notes. But according to a new report by researchers from the European Observatory on Health Systems and Policies, the London School of Hygiene & Tropical Medicine, the Royal Free London NHS Trust and the AI Centre for Value Based Healthcare, health systems are adopting AI faster than they understand it. The authors argue that the real challenge is not whether AI can transform health care, but whether governments and institutions are prepared to use it responsibly.

AI is not one thing, and it is not as intelligent as humans

One of the book’s key messages is that “AI” is often treated as a single, almost magical force, when in reality it covers very different tools. Some systems predict risks based on past data, others analyse images such as X-rays, and newer generative models produce text, images, or summaries by guessing what comes next based on patterns. None of these systems understands meaning or truth in a human sense. This matters because generative AI can produce confident but false information that looks believable. In health care, where decisions can have life-or-death consequences, confusing fluency with accuracy can be dangerous.

Trust is fragile and easy to lose

The authors stress that trust in AI cannot be assumed. Health professionals and patients must be able to rely on AI tools, but not blindly. When people trust AI too much, they may stop questioning its outputs, even when something feels wrong. This “automation bias” can weaken professional judgement rather than support it. At the same time, a lack of transparency about how systems work can make users sceptical or fearful. The book argues that trust must be earned through clear limits on what AI can do, strong human oversight, and ongoing monitoring once tools are deployed in real health settings.

Where AI helps most, and where caution is needed

AI already shows promise in several areas of health care, especially where it reduces routine administrative work. Tools that automate documentation, scheduling and data management can free up time for health workers without making high-risk decisions. In clinical care, AI is being tested for imaging, diagnostics and decision support, but most evidence still comes from pilot projects rather than everyday use. In public health, AI could help forecast outbreaks, analyse behaviour and target interventions more precisely, but only if data systems are strong and public trust is protected. Without good governance, these tools could also increase surveillance or exclusion.

Equity, power and the need for rules

A central concern of the book is fairness. AI systems trained on incomplete or biased data can reinforce existing inequalities, affecting marginalised groups the most. There are also global risks: most advanced AI systems are developed by a small number of companies and countries, raising questions about who controls data, infrastructure and decision-making. The authors argue that regulation is essential, not to slow innovation but to guide it. They point to the European Union’s AI Act and long-standing human rights frameworks, such as the Oviedo Convention, as efforts to ensure that technology serves people, not the other way around.

The real choice is political, not technical

AI will not fix underfunded health systems, staff shortages, or weak public institutions on its own. Used carefully, it can support professionals, improve efficiency and strengthen public health planning. Used carelessly, it can deepen inequality, blur accountability and damage trust. The future of AI in health will be shaped less by algorithms than by policy choices. Demystifying AI, the authors argue, means keeping humans firmly in charge, and remembering that technology should always be a means to better health, not an end in itself.

  • FIRST PUBLISHED IN:
  • Devdiscourse