Disinformation: A systemic threat fuelled by technology and human psychology

CO-EDP, VisionRI | Updated: 15-12-2025 10:13 IST | Created: 15-12-2025 10:13 IST

Disinformation is evolving into one of the most disruptive forces shaping public life, destabilizing democracies, amplifying social conflict, and eroding trust in institutions at a scale not seen in previous decades. A new scholarly review argues that the threat is no longer confined to political propaganda or isolated misinformation incidents but is now embedded within digital systems, social behavior, and cognitive vulnerabilities that together form a powerful engine for manipulation.

The study, titled “Disinformation: History, Drivers, and Countermeasures,” published in Encyclopedia, analyses how coordinated falsehoods emerge, spread, and persist. The review brings together findings across psychology, media studies, political science, and technology research to produce one of the most comprehensive analyses to date of how disinformation operates and how it can be countered. 

A long history of manipulation accelerated by today’s digital landscape

The authors trace the roots of disinformation back centuries, showing that although the term gained prominence during the Cold War, organized deception has long been part of religious conflicts, political struggles, wartime propaganda, and media sensationalism. What has changed is the environment in which such manipulations operate. The rise of social platforms, algorithmic amplification, frictionless sharing, and global instant communication ensures that false narratives can travel further, faster, and with greater emotional impact than ever before.

The review clarifies disinformation as the intentional creation or dissemination of false or misleading information designed to harm institutions, groups, or individuals. It distinguishes this from misinformation, which spreads without malicious intent, and malinformation, which distorts genuine information for harmful purposes. This distinction matters because the authors argue that intent drives the architecture of modern disinformation campaigns. Whether orchestrated by state actors, political movements, extremist groups, commercial fraud networks, or online communities, today’s disinformation campaigns are strategic, persistent, and carefully engineered to exploit psychological weak points.

The research highlights how technological systems act as force multipliers. Recommendation algorithms reward engagement, which disproportionately elevates sensational or emotionally charged content. Viral loops encourage impulsive sharing, often without verification. Encrypted messaging channels create closed ecosystems where false stories spread unchecked. Meanwhile, digital microtargeting allows campaigns to personalize narratives for ideological groups, making them more persuasive and harder to detect.
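
To make the amplification mechanism concrete, the sketch below ranks two posts purely by predicted engagement. The fields, weights, and example numbers are illustrative assumptions rather than figures from the review or any real platform; they simply show how an emotionally charged falsehood can outrank a sober correction when accuracy plays no part in the score.

```python
# Minimal sketch of engagement-weighted ranking; field names and weights are
# illustrative assumptions, not taken from the study or any real platform.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_clicks: float   # expected click-through rate
    predicted_shares: float   # expected share rate

def engagement_score(post: Post) -> float:
    # A feed tuned purely for engagement rewards interaction signals;
    # emotionally charged content tends to generate more of them, so it
    # rises in rank regardless of accuracy.
    return 0.4 * post.predicted_clicks + 0.6 * post.predicted_shares

posts = [
    Post("Measured correction with sources", predicted_clicks=0.02, predicted_shares=0.01),
    Post("Outrage-framed false claim", predicted_clicks=0.08, predicted_shares=0.12),
]

# The charged falsehood outranks the neutral correction on engagement alone.
for post in sorted(posts, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):.3f}  {post.text}")
```

Running the sketch prints the outrage-framed post first, mirroring the review's point that engagement-optimized ranking is agnostic to truth.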

Another critical insight is the blurred line between top-down and bottom-up disinformation. While organized actors seed narratives, everyday users amplify them, sometimes knowingly, sometimes not, based on emotional reactions, identity alignment, or perceived social consensus. The result is a hybrid model of manipulation where elite-driven deception gains grassroots momentum, making debunking and content moderation far more difficult.

Psychological and social drivers make people vulnerable to false narratives

The study devotes substantial attention to the psychological and cognitive mechanisms that allow disinformation to succeed. Bruno and Moriggi outline a matrix of biases and mental shortcuts that shape how people absorb and share information. Processing fluency makes familiar claims appear more truthful. Repetition reinforces belief even when claims have been previously debunked. Emotions, especially fear and outrage, override analytical thinking and speed up sharing behavior. Group identity influences how narratives are interpreted, with individuals more likely to trust information that reinforces their in-group worldview.

The review explains that people are not passive recipients but active participants in disinformation ecosystems. They share false narratives not only because they believe them but because these narratives serve emotional, social, or ideological needs. Stories framed around moral dichotomies, threats to identity, or hero-villain dynamics trigger strong psychological responses. Narratives that align with pre-existing worldviews gain traction because they provide coherence and emotional satisfaction, even when factually incorrect.

Social dynamics also play a major role. Online communities often reinforce false narratives by creating feedback loops that reward conformity and silence dissent. Echo chambers, polarization, and partisan media infrastructures deepen cognitive fragmentation, making populations more susceptible to targeted manipulation. The authors note that disinformation persists not only because people are deceived but because they become invested in the narratives, treating them as part of their identity or moral mission.

The paper also highlights the interplay between cognitive limitations and modern information overload. Humans struggle to evaluate vast amounts of data, making shortcuts necessary; disinformation exploits these shortcuts. As attention becomes a scarce resource, emotionally charged content gains a competitive advantage, allowing falsehoods that provoke strong reactions to outperform neutral or corrective information.

The authors argue that understanding these drivers is essential for effective countermeasures. Without addressing why people believe and share falsehoods, interventions risk treating symptoms rather than causes.

Countermeasures must combine cognitive, educational, technical, and policy tools

The study outlines a range of interventions, stressing that no single solution is sufficient. Effective responses must operate across multiple layers: the individual, the platform, the community, and the regulatory environment.

Psychological “prebunking,” also known as inoculation, emerges as one of the strongest evidence-based strategies. Prebunking exposes people to weakened versions of manipulative techniques, such as false dilemmas, scapegoating, or conspiracy framing, before they encounter real disinformation. This builds mental defenses that reduce susceptibility. Short prebunking videos and gamified tools have demonstrated positive effects across demographic groups.

Media literacy programs remain critical, but the study notes that they must evolve. Traditional literacy focused on evaluating sources is no longer enough; programs must train people to recognize emotional manipulation, cognitive biases, and narrative framing. Skills such as lateral reading, verification habits, and digital self-regulation are increasingly important. The authors emphasize that interventions should be evidence-driven, as poorly designed programs may inadvertently increase cynicism or mistrust.

Fact-checking and corrections continue to play an important role, especially when delivered promptly and with clear reasoning. While corrections cannot fully undo belief in false narratives, they reduce spread and help shape public understanding over time. Accuracy prompts, brief reminders encouraging people to reflect before sharing, also reduce the likelihood of forwarding false content.
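
As a concrete illustration of such a prompt, the minimal sketch below interposes a single reflection step between reading and resharing. The function name, prompt wording, and flow are hypothetical, invented for this example rather than drawn from the study or any platform.

```python
# Hypothetical sketch of an accuracy prompt: a brief reflection step interposed
# before the share action. Names and wording are invented for illustration.
def share_with_accuracy_prompt(post_text: str) -> bool:
    """Ask the user to pause and rate accuracy before confirming a share."""
    print(f"You are about to share:\n  {post_text}")
    rating = input("How accurate do you think this is (1 = not at all, 5 = very)? ")
    print(f"You rated it {rating}/5.")
    confirm = input("Share anyway? [y/N] ").strip().lower()
    # The prompt adds friction rather than blocking: sharing stays possible,
    # but impulsive forwarding is interrupted by a moment of reflection.
    return confirm == "y"

if __name__ == "__main__":
    shared = share_with_accuracy_prompt("Breaking: unverified claim making the rounds")
    print("Shared." if shared else "Held back.")
```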

Technical solutions form another critical pillar. Provenance tools, such as digital content credentials, improve transparency around the origin and editing history of media. Platform-level interventions, such as friction-based design, reduced algorithmic amplification of sensational content, and greater transparency in recommendation systems, can slow the spread of disinformation. However, the authors caution that technical fixes alone cannot overcome the psychological and social dimensions of the problem.
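
The provenance idea can likewise be sketched in a few lines. The snippet below assumes a media item may carry an optional credentials record with an issuer and edit history; that record structure is a hypothetical stand-in for illustration and does not reflect the API of any real content-credentials standard.

```python
# Hypothetical provenance check: map an optional credentials record to a
# user-facing label. The record structure is an illustrative stand-in, not
# the API of any real content-credentials standard.
from typing import Optional

def provenance_label(credentials: Optional[dict]) -> str:
    """Summarize a media item's origin and edit history for display."""
    if credentials is None:
        return "No content credentials: origin and edit history unknown."
    issuer = credentials.get("issuer", "unknown issuer")
    edits = credentials.get("edit_history", [])
    return f"Credentials from {issuer}; {len(edits)} recorded edit(s)."

# An item carrying a provenance record versus one without.
print(provenance_label({"issuer": "Example News Desk", "edit_history": ["crop", "resize"]}))
print(provenance_label(None))
```

Surfacing a label like this keeps the content available while giving readers a transparency signal, consistent with the review's emphasis on friction and disclosure over removal.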

Policy and regulatory frameworks are expanding, with the EU Digital Services Act cited as a prominent example of systemic governance. These frameworks require platforms to mitigate systemic risks, enhance transparency, share data with researchers, and apply due diligence obligations. Such regulations aim to shift responsibility from individual users toward the organizations that shape digital ecosystems.

The authors also caution against interventions that may unintentionally suppress legitimate expression or foster broad distrust. The goal, they argue, is not to eliminate skepticism but to reinforce constructive critical thinking while maintaining public trust in democratic institutions, journalism, and scientific expertise.

First published in: Devdiscourse