AI chatbots show promise in public health emergencies, but ethical gaps remain

CO-EDP, VisionRI | Updated: 28-03-2025 20:18 IST | Created: 28-03-2025 20:18 IST

Artificial intelligence (AI)-based chatbots can play a pivotal role in managing public health emergencies, reveals a new peer-reviewed study that calls for the integration of ethical design, government support, and continuous improvement to unlock their full potential. The findings, published in the journal Future Internet, emerge amid growing interest in leveraging AI to support overburdened healthcare systems during crises such as pandemics and natural disasters.

The review "The Role of AI-Based Chatbots in Public Health Emergencies: A Narrative Review," led by Arpitha S. Venkatesh and colleagues, analyzed a decade of research on chatbot deployment across various emergency scenarios. The study highlights how AI chatbots served as accessible, scalable, and cost-effective tools for health communication, mental health support, triaging, and public engagement. However, the authors warned that without coordinated governance, ethical oversight, and adaptive learning capabilities, chatbot interventions risk being ineffective or even harmful.

The COVID-19 pandemic served as a stress test for global healthcare systems and a catalyst for AI chatbot deployment. Chatbots such as WHO's Health Alert on WhatsApp, the CDC's Clara, and Babylon Health's COVID-19 care assistant demonstrated how AI tools could disseminate accurate health information, reduce hospital congestion, and promote behavioral compliance such as mask-wearing and vaccination uptake.

According to the study, AI chatbots proved especially valuable in low-resource and high-risk environments, where traditional healthcare delivery is disrupted. Their 24/7 availability, multilingual capabilities, and anonymity made them suitable for addressing stigmatized issues, such as mental health during quarantine or sexual health in disaster-stricken regions. The review found that in crisis communication, chatbots supported rumor control, corrected misinformation, and maintained trust through transparent, timely updates.

Despite these benefits, the authors emphasize that chatbot performance varied widely due to inconsistent design principles, limited user feedback loops, and insufficient contextual adaptation. The review found that most chatbots deployed during emergencies lacked real-time learning capabilities, leading to outdated or culturally insensitive responses. Moreover, the absence of standardized evaluation frameworks made it difficult to compare effectiveness across deployments.

Ethical concerns emerge as a key theme. The authors cite instances where chatbots disseminated inaccurate information, leading to public confusion or delayed care. Privacy risks were also highlighted, particularly in regions lacking robust data protection laws. The review calls for human-in-the-loop systems and explainable AI mechanisms to ensure accountability and user safety.
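
As a rough illustration of the human-in-the-loop pattern the review advocates, the Python sketch below escalates low-confidence or high-risk replies to a human reviewer and attaches sources to the rest. The confidence threshold, the keyword screen, and the review-queue hook are assumptions made for this example, not details drawn from the study.

```python
from dataclasses import dataclass

# Illustrative threshold; a real deployment would tune this per use case.
CONFIDENCE_THRESHOLD = 0.85

@dataclass
class BotReply:
    text: str
    confidence: float   # model's self-reported certainty, 0.0 to 1.0
    sources: list[str]  # references behind the answer, for explainability

def is_high_risk(message: str) -> bool:
    """Crude keyword screen; a real system would use a trained classifier."""
    red_flags = ("chest pain", "suicide", "overdose", "can't breathe")
    return any(flag in message.lower() for flag in red_flags)

def queue_for_human_review(message: str, reply: BotReply) -> None:
    """Stub standing in for a clinician review queue."""
    print(f"ESCALATED: {message!r} (confidence={reply.confidence:.2f})")

def respond(user_message: str, reply: BotReply) -> str:
    """Send the bot's answer only when it is confident and low-risk;
    otherwise escalate to a human and tell the user so."""
    if reply.confidence < CONFIDENCE_THRESHOLD or is_high_risk(user_message):
        queue_for_human_review(user_message, reply)
        return ("I want to make sure you get accurate information. "
                "A human health worker will follow up shortly.")
    # Explainability: surface the sources the answer draws on.
    return f"{reply.text}\n\nSources: {'; '.join(reply.sources)}"

print(respond("I have chest pain, what should I do?",
              BotReply("Rest and hydrate.", 0.95, ["https://www.who.int"])))
```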

The review further identifies the digital divide as a barrier to equitable access. Populations with limited internet connectivity, low digital literacy, or disability-related challenges were often excluded from chatbot services. The study recommends inclusive design practices, offline functionality, and integration with SMS or interactive voice response systems to bridge this gap.
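
A minimal sketch of that channel fallback, assuming a hypothetical per-user capability record; the SMS, voice, and web senders here are print stubs standing in for a real gateway, text-to-speech engine, and chat backend rather than any actual API:

```python
from enum import Enum

class Channel(Enum):
    WEB_CHAT = "web_chat"
    SMS = "sms"
    IVR = "ivr"  # interactive voice response

def pick_channel(has_internet: bool, feature_phone_only: bool,
                 prefers_voice: bool) -> Channel:
    """Fall back to lower-bandwidth channels so users without smartphones
    or reliable connectivity can still be reached."""
    if prefers_voice:
        return Channel.IVR
    if not has_internet or feature_phone_only:
        return Channel.SMS
    return Channel.WEB_CHAT

def deliver(message: str, channel: Channel) -> None:
    """Dispatch to a channel-specific sender (stubbed with prints)."""
    if channel is Channel.SMS:
        # GSM-7 SMS segments carry at most 160 characters each.
        for i in range(0, len(message), 160):
            print(f"[SMS segment] {message[i:i + 160]}")
    elif channel is Channel.IVR:
        print(f"[IVR: synthesize speech] {message}")
    else:
        print(f"[Web chat] {message}")

# Example: a user with a feature phone and no data plan gets SMS.
deliver("Vaccination sites are open 9am-5pm. Reply HELP for a human operator.",
        pick_channel(has_internet=False, feature_phone_only=True,
                     prefers_voice=False))
```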

In addition to technical and ethical recommendations, the study calls for stronger institutional coordination. Public-private partnerships, clear regulatory guidelines, and long-term funding were identified as prerequisites for sustainable chatbot deployment. The authors cite examples from Singapore and South Korea, where government-endorsed chatbot platforms provided unified messaging and consistent service delivery during COVID-19.

The researchers also examined the potential of chatbots in mental health response, a domain increasingly strained during public health crises. AI chatbots such as Wysa, Woebot, and Youper were found to reduce anxiety, loneliness, and depressive symptoms through cognitive behavioral techniques and conversational support. However, the review stresses that these tools are not substitutes for clinical care and should be positioned as preliminary support or triage tools.

Another key insight involves user trust and adoption. The study found that transparency regarding chatbot capabilities, limitations, and data use policies significantly influenced public engagement. Chatbots that disclosed they were not human, clarified their scope of advice, and linked to credible sources enjoyed higher usage rates and user satisfaction.
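
The disclosure pattern the review links to higher trust can be pictured as a simple wrapper around every answer. The wording and the source list below are illustrative assumptions, not text from any deployed chatbot:

```python
BOT_DISCLOSURE = (
    "I'm an automated assistant, not a human or a doctor. "
    "I can share general public health guidance but cannot diagnose or treat."
)

# Hypothetical curated list; a real deployment would point at its own
# health authority's pages.
CREDIBLE_SOURCES = ["https://www.who.int", "https://www.cdc.gov"]

def transparent_reply(answer: str) -> str:
    """Prepend the non-human disclosure and append credible sources,
    the two practices the review associates with higher engagement."""
    links = "\n".join(f"Learn more: {url}" for url in CREDIBLE_SOURCES)
    return f"{BOT_DISCLOSURE}\n\n{answer}\n\n{links}"

print(transparent_reply("Masks reduce transmission in crowded indoor spaces."))
```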

To address the knowledge gap in chatbot efficacy, the authors urge the development of standardized evaluation metrics. These should assess user engagement, response accuracy, emotional resonance, and health outcomes. The study also recommends interdisciplinary collaboration between computer scientists, public health experts, psychologists, and ethicists to design and audit chatbot systems.
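
One way such standardized metrics could be recorded and aggregated is sketched below. The field names, scales, and weights are assumptions for illustration only; the study proposes the four dimensions but does not define these measures:

```python
from dataclasses import dataclass, asdict

@dataclass
class ChatbotEvaluation:
    """One standardized record per deployment, covering the four
    dimensions the review proposes."""
    deployment: str
    engagement_rate: float       # sessions completed / sessions started
    response_accuracy: float     # fraction of replies verified correct by experts
    emotional_resonance: float   # mean user-rated empathy score, 0-5 scale
    health_outcome_delta: float  # change in a target indicator, e.g. uptake (pct points)

def composite_score(e: ChatbotEvaluation,
                    weights=(0.2, 0.4, 0.2, 0.2)) -> float:
    """Weighted aggregate so deployments become comparable.
    Weights are illustrative, not taken from the study."""
    normalized = (
        e.engagement_rate,
        e.response_accuracy,
        e.emotional_resonance / 5.0,                          # rescale 0-5 to 0-1
        min(max(e.health_outcome_delta / 10.0, 0.0), 1.0),   # cap a +10pt gain at 1.0
    )
    return sum(w * v for w, v in zip(weights, normalized))

record = ChatbotEvaluation("city-covid-bot", 0.72, 0.91, 3.8, 4.5)
print(asdict(record), round(composite_score(record), 3))
```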

The researchers further envision the next generation of AI chatbots as more intelligent, empathetic, and context-aware. Future systems should incorporate natural language processing enhancements, cultural sensitivity, and real-time learning from user interactions. The integration of AI chatbots into broader public health infrastructure, including contact tracing, telemedicine, and electronic health records, was proposed as a long-term strategy for resilience.

FIRST PUBLISHED IN: Devdiscourse