Human bias and AI systems create perfect storm for online misinformation

CO-EDP, VisionRI | Updated: 29-09-2025 09:27 IST | Created: 29-09-2025 09:27 IST

A growing body of research is pointing to the dangers of misinformation in online spaces, and a new review paper published in AI & Society highlights the crucial role algorithms play in amplifying false content.

The review, authored by B.V.E. Hyde of the University of Bristol and Bangor University, examines Donghee Shin’s book Artificial Misinformation: Exploring Human-Algorithm Interaction Online, published by Palgrave Macmillan in 2024. Hyde argues that Shin’s work provides valuable insight into how human vulnerabilities and algorithmic systems interact to accelerate misinformation, while also exposing important gaps and ethical risks.

How do humans and algorithms fuel the spread of misinformation?

The review outlines Shin’s key claim that human cognition and algorithmic design reinforce each other in ways that allow misinformation to thrive. Humans, prone to mental shortcuts, rely on heuristics that leave them especially susceptible to messages they encounter repeatedly. Algorithms, particularly the recommendation engines used by social media platforms, exploit these biases by creating popularity signals that reward repetition and visibility.

This dynamic creates a feedback loop in which misinformation spreads more rapidly, escalates in intensity, and fuels polarization. Platforms such as TikTok and other algorithm-driven services are highlighted as prime environments where false content gains traction and is transformed into tools of radicalization. According to the analysis, this interaction between human cognitive limitations and automated systems explains why misinformation has become one of the defining challenges of the digital age.
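
To make this feedback loop concrete, the toy simulation below is an illustration for this article rather than a model taken from Shin’s book: it sketches how a ranker that rewards past exposure can turn a small early lead for a false story into dominant visibility. The item names, the `BELIEF_BOOST_PER_EXPOSURE` constant, and the scoring rule are all illustrative assumptions.

```python
import random

# Toy illustration (not from Shin's book): an engagement-driven ranker that
# boosts items in proportion to past exposure, so whatever is already popular
# keeps getting shown, regardless of accuracy.

exposure = {"accurate_story": 0.0, "false_story": 0.0}  # how often each item was shown
BELIEF_BOOST_PER_EXPOSURE = 0.05                         # crude stand-in for repetition effects

def rank(items):
    # Popularity signal: prior exposure plus a little noise decides what is shown next.
    return max(items, key=lambda name: items[name] + random.random())

def simulate(rounds=1000, false_head_start=5):
    # Give the false story a small head start (e.g. one viral clip).
    exposure["false_story"] += false_head_start
    belief = {name: 0.0 for name in exposure}
    for _ in range(rounds):
        shown = rank(exposure)
        exposure[shown] += 1  # being shown raises future visibility
        belief[shown] = min(1.0, belief[shown] + BELIEF_BOOST_PER_EXPOSURE)
    return exposure, belief

if __name__ == "__main__":
    final_exposure, final_belief = simulate()
    print(final_exposure)  # the early lead compounds into near-total dominance
    print(final_belief)
```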

Hyde underscores that Shin’s work positions misinformation as more than just individual falsehoods. It evolves into a structural problem, where platform design, business incentives, and user psychology combine to create an ecosystem that rewards and amplifies misleading or manipulative content.

Why do false beliefs persist even after correction?

A key point raised in the review is that misinformation remains influential even when it is debunked. Once people build mental models that incorporate false information, those false elements are not easily dislodged by later corrections. This persistence makes debunking less effective than prevention.

Shin argues for a preventative approach centered on prebunking, where users are cautioned in advance about potential misinformation and guided toward more reliable sources before false content can take root. His proposed solutions include algorithmic nudges that steer users toward diverse information environments, breaking them out of echo chambers that reinforce existing beliefs.
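
A minimal sketch of what such a nudge could look like in practice is shown below. It is our own illustrative reranking example under assumed names (`Item`, `nudge_rerank`, `source_cluster`), not an implementation described in the book: a few feed slots are reserved for sources outside the user’s usual cluster.

```python
from dataclasses import dataclass

# Illustrative sketch of an "algorithmic nudge": blend the engagement-ranked
# feed with out-of-bubble sources so a user is not shown only items matching
# their prior interactions. All names and scores here are assumptions.

@dataclass
class Item:
    title: str
    engagement_score: float  # what a pure engagement ranker would sort by
    source_cluster: str      # crude stand-in for the user's "echo chamber"

def nudge_rerank(items, user_cluster, diversity_slots=2, top_k=5):
    """Return top_k items, reserving a few slots for out-of-cluster sources."""
    in_bubble = sorted((i for i in items if i.source_cluster == user_cluster),
                       key=lambda i: i.engagement_score, reverse=True)
    out_bubble = sorted((i for i in items if i.source_cluster != user_cluster),
                        key=lambda i: i.engagement_score, reverse=True)
    feed = in_bubble[: top_k - diversity_slots] + out_bubble[:diversity_slots]
    return feed[:top_k]

feed = nudge_rerank(
    [Item("A", 0.9, "cluster_x"), Item("B", 0.8, "cluster_x"),
     Item("C", 0.7, "cluster_y"), Item("D", 0.6, "cluster_z"),
     Item("E", 0.5, "cluster_x")],
    user_cluster="cluster_x",
)
print([i.title for i in feed])  # includes items from outside the user's cluster
```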

The book also introduces the concept of a cognitive vaccine. By warning users about misinformation and exposing them to anticipatory counterarguments, it seeks to inoculate them against false narratives before they encounter them in the wild. Experiments cited in the work suggest that people who are made aware of the threat of misinformation in advance are less likely to believe or spread it.

Hyde recognizes these contributions as practical and innovative but also warns of their limitations. The strategies rely heavily on assumptions about how scientific and cognitive habits operate, and they may unintentionally mimic forms of indoctrination. Labeling information as misinformation or conspiracy without clear standards risks undermining intellectual humility and could discourage open debate.

What are the strengths and weaknesses of Shin’s approach?

Hyde acknowledges the value of Shin’s interdisciplinary analysis, which draws from psychology, journalism, and technology. By focusing on the role of algorithms, Shin contributes to a growing applied literature that looks at how artificial intelligence intersects with misinformation. This perspective is timely and relevant, given the increasing reliance on AI systems to manage and distribute information online.

However, the review also points to shortcomings. Hyde criticizes the limited engagement with existing philosophical and theoretical literature on misinformation, conspiracy theories, and public trust. Scholars such as Brian Keeley, Naomi Oreskes, Quassim Cassam, and others have long studied the epistemological and ethical dimensions of misinformation, yet their work is largely absent from Shin’s analysis. This selective engagement, Hyde argues, weakens the interdisciplinarity claimed by the book.

Another concern is Shin’s selective use of evidence. While the book emphasizes how algorithms inherit and amplify human biases, it does not fully consider arguments that AI systems might also reduce bias under certain conditions. Additionally, proposals to cultivate scientific habits of cognition as a defense against misinformation are seen as overly optimistic. Critics argue that science itself is shaped by its own norms, biases, and social influences, raising questions about whether adopting “scientific thinking” necessarily leads to greater objectivity.

Ethical concerns also loom large. The inoculation strategy, while promising, risks being interpreted as a top-down imposition of authority, where certain narratives are pre-labeled as false. Hyde warns that this approach could silence legitimate dissent or alternative viewpoints if misapplied.
