Chatbot Safety: Navigating Conspiracy Theories in AI Conversations

Research from the Digital Media Research Centre highlights concerns over chatbots engaging in discussions around conspiracy theories. The study found that while some chatbots perpetuate conspiratorial discussions, others effectively implement guardrails. This raises questions about AI’s role in shaping public perception and its responsibility in dialogue moderation.


Devdiscourse News Desk | Brisbane | Updated: 24-11-2025 11:43 IST | Created: 24-11-2025 11:43 IST
Country: Australia

With the rise of AI technology, chatbots have become prevalent across various platforms. However, recent research points to a crucial issue: their engagement with conspiracy theories. Researchers from the Digital Media Research Centre examined this interaction, revealing that many chatbots do not adequately shut down conspiratorial conversations.

The study, now awaiting publication in M/C Journal, demonstrated that superficial guardrails often lead chatbots to present false conspiracy theories alongside factual information. This phenomenon, known as "bothsidesing," was particularly evident in discussions around notable political events and figures. While Google's Gemini bot demonstrated effective resistance by refusing to engage with recent political controversies, other models like Grok-2 Mini's Fun Mode fell short, encouraging playful yet misleading exchanges.

The research underscores the need for robust safety measures within AI systems. Ensuring chatbots do not repeat harmful misinformation is pivotal to safeguarding users from spiraling into deeper conspiratorial beliefs, which can have damaging societal impacts.

(With inputs from agencies.)
