Public health systems must adopt AI to detect future outbreaks earlier
New research outlines how machine learning and large language models could reshape early warning capabilities for infectious outbreaks. As governments struggle to keep pace with fast-shifting biological, environmental, and behavioral risks, specialists argue that AI may soon determine how quickly the world detects and responds to the next major health emergency.
The findings come from the study "Artificial Intelligence Applications in Horizon Scanning for Infectious Diseases," which explores how AI-driven tools can strengthen Horizon Scanning, a strategic method used by public health agencies to detect emerging threats, interpret weak signals, and guide policy responses.
AI expands the scope of public health foresight
Horizon Scanning, traditionally based on expert insight and manual monitoring, has grown more complex as data streams multiply. The study notes that infectious disease risks now emerge not only from known pathogens but also from social instability, shifting ecological systems, and the global movement of both people and animals. AI offers a way to integrate and make sense of this expanding threat landscape.
The researchers outline the five classic stages of Horizon Scanning: defining a framework, identifying weak signals, accessing data sources, assessing signal significance, and reporting to decision-makers. According to the review, AI can enhance each stage, particularly when working alongside domain specialists.
Machine learning models can process vast datasets from surveillance systems, mobility records, laboratory outputs, online media, and environmental indicators. These models detect anomalies and patterns that might otherwise remain buried. Meanwhile, large language models are being tested for real-time scanning of global news, social media, and scientific publications, helping experts track emerging symptoms, new disease clusters, veterinary alerts, changes in human behavior, and shifts in medical research activity.
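To make the anomaly-detection idea concrete, here is a minimal sketch, assuming nothing about the study's actual models: a rolling z-score detector that flags a surveillance count far above its recent baseline. The sample counts, window size, and threshold are illustrative assumptions.

```python
import numpy as np

def flag_anomalies(counts, window=8, threshold=3.0):
    """Flag time points whose count deviates sharply from the
    trailing-window mean (a simple rolling z-score detector)."""
    counts = np.asarray(counts, dtype=float)
    flags = []
    for t in range(window, len(counts)):
        history = counts[t - window:t]
        mu, sigma = history.mean(), history.std()
        if sigma == 0:
            sigma = 1.0  # avoid division by zero on a flat history
        z = (counts[t] - mu) / sigma
        if z > threshold:
            flags.append((t, counts[t], round(z, 2)))
    return flags

# Example: steady weekly case counts with a sudden jump at the end
weekly_cases = [12, 9, 11, 10, 13, 12, 10, 11, 12, 10, 34]
print(flag_anomalies(weekly_cases))  # -> [(10, 34.0, 21.72)]
```

Real systems would layer seasonality adjustment and multiple data streams on top, but the core pattern, comparing each new observation to a recent baseline, is the same.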
The review highlights early examples of this shift. The UK Health Security Agency has already begun using large language models to scan worldwide news feeds, automatically remove duplicated reports, and identify possible disease-related signals. Similar tools can now process multilingual content, increasing visibility into regions where early warnings are often delayed or politically constrained.
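The deduplication step can be approximated with standard text similarity rather than a large language model. Below is a hedged sketch using TF-IDF cosine similarity via scikit-learn; the 0.7 threshold and the sample headlines are assumptions for illustration, not details from the UKHSA system.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def deduplicate_reports(reports, threshold=0.7):
    """Keep the first occurrence of each story; drop later reports whose
    TF-IDF cosine similarity to an already-kept report exceeds threshold."""
    vectors = TfidfVectorizer().fit_transform(reports)
    sims = cosine_similarity(vectors)
    kept = []
    for i in range(len(reports)):
        if all(sims[i, j] < threshold for j in kept):
            kept.append(i)
    return [reports[i] for i in kept]

reports = [
    "Cluster of unexplained pneumonia cases reported in coastal city",
    "Officials report cluster of unexplained pneumonia cases in coastal city",
    "Avian influenza detected at poultry farm, culling under way",
]
for r in deduplicate_reports(reports):
    print(r)  # the near-duplicate second headline is dropped
```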
AI can also support the creation of new data. The study notes that expert surveys, such as Delphi panels, can use chatbots to test potential response patterns or simulate how different experts might interpret early signals. These simulated responses, while not scientific findings in themselves, can expose knowledge gaps, challenge group assumptions, and prompt more rigorous debate among specialists.
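A simulated panel of this kind could be orchestrated with a few lines of glue code. In the sketch below, `ask_model` is a hypothetical stand-in for whatever LLM client an agency actually uses, and the personas are invented for illustration; the point is the loop structure, with outputs treated as discussion prompts rather than evidence.

```python
def ask_model(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM client call; replace with your
    # own client. Returns a canned reply here so the sketch runs end to end.
    return f"[simulated reply to: {prompt[:60]}...]"

PERSONAS = [
    "a field epidemiologist focused on zoonotic spillover",
    "a hospital infection-control specialist",
    "a health economist sceptical of early interventions",
]

def simulate_panel(signal: str) -> list[str]:
    """Collect one simulated reading of a weak signal per persona.
    Outputs are prompts for human expert debate, not findings."""
    replies = []
    for persona in PERSONAS:
        prompt = (
            f"You are {persona}. In three sentences, assess this early "
            f"signal and state what evidence would change your view:\n{signal}"
        )
        replies.append(ask_model(prompt))
    return replies

for reply in simulate_panel("Unusual respiratory symptoms in farmed mink"):
    print(reply)
```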
Weak signals become a critical test for AI systems
The paper puts significant emphasis on weak signals, defined as subtle or ambiguous early indications of potential change. Weak signals can include unexpected symptoms in animals, minor shifts in clinical reports, unusual search engine queries, sudden changes in drug purchases, or hints of social unrest that might disrupt healthcare delivery. Because they are often vague or contradictory, weak signals are difficult to interpret consistently, even for trained experts.
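For streams such as drug purchases, one classic way to watch for a weak signal is a control chart. The sketch below uses an exponentially weighted moving average (EWMA) with its standard steady-state control limit; the calibration window, smoothing factor, and sample sales series are illustrative assumptions, not the authors' method.

```python
import statistics

def ewma_alerts(series, lam=0.3, k=3.0, calib=8):
    """EWMA control chart: alert when the smoothed series drifts more than
    k sigma (steady-state limit) above the calibration-window baseline."""
    baseline = statistics.mean(series[:calib])
    sigma = statistics.pstdev(series[:calib]) or 1.0
    limit = baseline + k * sigma * (lam / (2 - lam)) ** 0.5
    ewma, alerts = baseline, []
    for t, x in enumerate(series):
        ewma = lam * x + (1 - lam) * ewma  # smooth the incoming stream
        if ewma > limit:
            alerts.append(t)
    return alerts

# Illustrative daily antiviral sales: flat baseline, then a sustained rise
sales = [40, 42, 38, 41, 39, 43, 40, 41, 44, 52, 61, 70]
print(ewma_alerts(sales))  # -> [9, 10, 11]
```

Unlike a single-spike detector, the EWMA accumulates evidence, so it reacts to a sustained drift even when no individual day looks extreme.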
According to the researchers, AI provides new opportunities here but also introduces new complications. Machine learning and natural language systems excel at scanning large volumes of unstructured information, identifying patterns, and highlighting anomalies. These technologies may be able to detect clusters of symptoms, irregular mortality in wildlife, discussions in professional forums, or emerging trends in public sentiment long before traditional systems escalate alerts.
However, the authors caution that weak signals require careful human interpretation. AI systems may overstate the importance of noisy or unreliable data, misclassify risks, or offer fabricated outputs that appear authoritative. The review stresses that AI should act as a provocation tool rather than a decision-maker, a system that raises questions rather than delivers answers.
The researchers also observe that biases within training datasets can distort risk assessment. If AI tools rely heavily on English-language sources or on regions with stronger digital infrastructure, they may miss early warnings from low-resource settings. Such blind spots amplify existing inequities in global surveillance, potentially delaying detection in the areas most vulnerable to emerging infections.
At the same time, AI-driven tools can help experts test assumptions more critically. By generating alternative interpretations or simulating diverse perspectives, chatbots can reduce the risk of groupthink, which has been identified as a contributing factor in slow responses to previous pandemics. The study argues that such tools may eventually become standard components of expert workshops and scenario-planning exercises, provided strong oversight remains in place.
Governance challenges and the need for human oversight
The study also gives significant weight to the risks of deploying AI in public health. While the opportunities are substantial, the authors warn that poor implementation carries serious consequences. Errors, hallucinations, data bias, privacy concerns, and overreliance on automated systems could undermine decision-making at critical moments.
The review identifies several areas requiring immediate attention.
First, AI-generated outputs must undergo mandatory human review. The researchers argue that the complexity of public health decision-making demands contextual knowledge, ethical reasoning, and political insight, none of which AI can provide reliably. In their view, AI should never be used to replace expert judgement but instead serve as an analytical extension of human capability.
Second, data quality remains a structural challenge. Public health data can be incomplete, delayed, or politically manipulated. AI tools may magnify these weaknesses if not properly managed. To counter this, the authors recommend diversifying data sources and developing stronger routines for verification and interpretation.
Third, privacy and security issues must be resolved before broad adoption. AI systems that monitor social media, healthcare records, or mobility data raise questions about surveillance, trust, and legality. Without clear governance, these tools risk public resistance, especially during politically sensitive outbreaks.
Fourth, the review warns against the illusion that AI is a neutral or objective actor. Because AI systems reflect the assumptions of those who design and train them, their outputs should be treated as one input among many. The authors note that AI is far from reliable enough to guide independent action and may inadvertently reinforce harmful biases.
Despite these concerns, the study remains optimistic about AI’s long-term role in Horizon Scanning. The authors advocate for building early warning dashboards that combine AI-driven analytics with human-filtered interpretation. They also recommend investing in natural language interfaces to help experts interact more easily with complex simulation models.
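One way to encode that human-filtered step is to make review status a first-class field that only a named person can change. The sketch below is an assumed design for such a dashboard, not the authors' proposal: the model supplies a relevance score, but escalation requires a human reviewer.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import List, Optional

@dataclass
class Signal:
    """One candidate signal on the dashboard. The model assigns a score,
    but only a named human reviewer can change the review status."""
    summary: str
    source: str
    model_score: float              # 0..1, AI-estimated relevance
    status: str = "pending_review"  # pending_review | dismissed | escalated
    reviewer: Optional[str] = None
    reviewed_at: Optional[datetime] = None

    def escalate(self, reviewer: str) -> None:
        """Promote to an alert, recording who decided and when."""
        self.status, self.reviewer = "escalated", reviewer
        self.reviewed_at = datetime.now(timezone.utc)

def review_queue(signals: List[Signal]) -> List[Signal]:
    """Surface unreviewed signals to humans, highest model score first."""
    pending = [s for s in signals if s.status == "pending_review"]
    return sorted(pending, key=lambda s: s.model_score, reverse=True)

# Example: pending items are ordered for human triage, not auto-escalated
queue = review_queue([
    Signal("Unusual wildlife die-off near wetland", "vet forum", 0.42),
    Signal("Spike in antiviral sales, region X", "pharmacy data", 0.87),
])
print([s.summary for s in queue])
```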

