AI-generated disinformation now undermines trust in real news

CO-EDP, VisionRI | Updated: 04-12-2025 10:40 IST | Created: 04-12-2025 10:40 IST

A new analysis published in Journalism & Mass Communication Quarterly warns the journalism and media sectors that artificial intelligence (AI) is reshaping the scale, speed and psychology of disinformation in ways that challenge democratic communication systems worldwide. The report argues that AI-powered falsehoods now pose risks that extend far beyond fabricated content, influencing public perception, undermining news legitimacy and altering the very way audiences interpret reality.

The study, titled “Disinformation in the Age of Artificial Intelligence (AI): Implications for Journalism and Mass Communication,” offers one of the most comprehensive examinations to date of the evolving relationship between generative AI and modern information disorder, drawing from political communication, psychology, journalism studies and emerging AI research. The authors argue that AI has transformed disinformation into a multidimensional problem that demands both technological and institutional responses.

AI transforms the creation and spread of false information

The study finds that AI has drastically changed how disinformation is produced, amplified and consumed. Generative tools such as ChatGPT, Gemini, Midjourney, Stable Diffusion and voice-cloning systems enable the rapid creation of synthetic narratives, fabricated images and deepfake videos at a scale previously impossible. This shift lowers production barriers for malicious actors, providing low-cost tools capable of generating highly persuasive, context-tailored disinformation for political, financial or ideological purposes.

The authors outline four interconnected layers that now drive AI-mediated disinformation: creation, dissemination, reception and perception. On the creation side, AI can fabricate text, audio and images that allow political operatives or foreign actors to mimic high-profile individuals, forge video evidence or produce targeted propaganda. The report highlights incidents such as AI-generated robocalls using cloned presidential voices and fabricated political scandals timed to influence election cycles. Such cases show how synthetic content can interfere with democratic processes and complicate traditional fact-checking workflows.

At the dissemination level, AI-powered bot networks, automated social accounts, and synthetic personas can amplify false messages at high velocity. These tools exploit algorithmic engagement patterns, bypass platform moderation and simulate authentic online behavior, making coordinated disinformation campaigns harder to detect. The study notes that generative AI also supports micro-targeted manipulation, allowing tailored propaganda to reach individuals based on ideology, identity, or psychological vulnerability.

But the impact extends into reception and perception, where AI changes how people interpret the credibility of news. The authors explain that individuals exposed to AI-generated falsehoods may respond differently than they do to traditional misinformation. Some synthetic content can appear more realistic or persuasive because it exploits cognitive shortcuts, emotional biases or contextual cues. Even when audiences identify a piece of content as manipulated, the ambiguity created by ubiquitous AI-generated media may erode overall trust in legitimate journalism.

This phenomenon, sometimes called the “liar’s dividend,” allows individuals or political actors to dismiss real events as fake, simply by invoking the possibility of AI manipulation. The authors warn that this dynamic, amplified by growing public anxiety about synthetic media, could undermine the credibility of verified information and weaken the role of journalism as a gatekeeper of truth.

Journalism faces a new era of verification challenges

The study underscores that traditional journalistic verification processes were not designed for a world saturated with synthetic media. Newsrooms now face a dual burden: verifying whether content is real and combating accusations that real content is artificially generated. This creates a complex environment where journalists must operate without eroding public trust or overemphasizing worst-case scenarios.

The authors point out that the fear surrounding AI-generated disinformation is sometimes overstated in media narratives. Experimental evidence shows that AI deepfakes are not always more persuasive than low-effort manipulated content. In some cases, simple “cheapfakes” or text-based misinformation can have similar or stronger influence. However, the authors caution that persuasive power should not be the only metric of concern. Even less convincing deepfakes can cause confusion, trigger emotional responses or reinforce existing biases, especially among individuals with high partisan motivation or low media literacy.

The paper highlights that audience susceptibility varies widely. Factors such as political ideology, prior knowledge, cognitive reflection, online experience and trust in mainstream media all shape how individuals interpret AI-enhanced disinformation. For some, highly polished synthetic images or videos may appear more credible than traditional political messaging. For others, the mere presence of AI may trigger skepticism toward all media, including legitimate journalism.

The authors warn that the cumulative effect is particularly dangerous: declining trust in journalism, increased polarization and greater reliance on partisan or conspiratorial narratives. As synthetic content becomes more prevalent, audiences may increasingly assume that anything they encounter could be fake, further weakening the common informational foundations required for democratic discourse.

Despite these challenges, the study stresses that journalists should avoid alarmist narratives that exaggerate the threat. Overstating the power of AI in producing disinformation could inadvertently legitimize false claims, fuel moral panic or undermine trust in real reporting. Instead, the authors call for measured, evidence-based communication that emphasizes context, verification and public literacy.

AI is also a tool for combating falsehoods, but risks remain

While AI plays a major role in creating and spreading disinformation, the study highlights its equally significant potential as a defensive tool. Researchers are now developing AI-assisted fact-checking systems, automated verification solutions, prebunking strategies, and audience-targeted corrections that may outperform traditional human-centered approaches.

The authors describe a range of experiments showing promising outcomes. AI-generated influencers, animated explainers and personalized corrective messages have been shown to reduce misperceptions and improve acceptance of factual information. In some controlled settings, AI-driven correction mechanisms performed better than human fact-checkers at addressing doubts about climate change or political misinformation. The scalability of these systems could enable more proactive, continuous debunking efforts across digital platforms.

Media literacy initiatives enhanced by AI personalization also appear to reduce susceptibility to false claims. AI-powered monitoring systems can detect anomalies, track narrative patterns and flag coordinated manipulation much faster than conventional analysis. As these systems improve, they may provide critical support for journalists, researchers and regulators tasked with safeguarding the information environment.
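The study does not specify how such monitoring systems work internally, but a minimal sketch of one commonly used signal, many distinct accounts posting near-identical text within a short time window, could look like the following. The data fields, similarity rule and thresholds here are illustrative assumptions for the example, not details taken from the paper.

```python
# Illustrative sketch only: flags clusters of near-identical posts shared by many
# accounts inside a short time window, one crude signal of coordinated activity.
# Field names, the normalization rule and the thresholds are assumptions made for
# this example, not anything specified in the study.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Post:
    account: str
    text: str
    timestamp: float  # seconds since epoch

def normalize(text: str) -> str:
    """Lowercase and collapse whitespace so trivially edited copies match."""
    return " ".join(text.lower().split())

def flag_coordinated_clusters(posts, min_accounts=20, window_seconds=3600):
    """Return messages pushed by many distinct accounts within one time window."""
    buckets = defaultdict(list)
    for post in posts:
        buckets[normalize(post.text)].append(post)

    flagged = []
    for message, group in buckets.items():
        group.sort(key=lambda p: p.timestamp)
        accounts = {p.account for p in group}
        duration = group[-1].timestamp - group[0].timestamp
        if len(accounts) >= min_accounts and duration <= window_seconds:
            flagged.append((message, len(accounts), duration))
    return flagged
```

Real systems combine many such signals (posting cadence, account age, network structure, semantic similarity) and still require human review, which is why the study frames these tools as support for journalists and researchers rather than replacements.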

AI as a solution is not without its own risks. Dependence on automated detection tools may introduce biases or blind spots. Corrective AI systems may also be perceived as partisan or manipulative, especially in polarized political climates. And the rapid evolution of generative AI means that adversarial actors can exploit the same defensive tools, using them to bypass filters or craft more subtle disinformation tactics.

The study concludes that while AI-driven countermeasures are essential, they must be implemented with caution, transparency and interdisciplinary oversight. Solutions must balance technological efficiency with ethical considerations, privacy protection and accountability.

A call for strategic, evidence-based responses in journalism and policy

The authors argue that addressing AI-mediated disinformation requires a shift from reactive crisis response toward long-term, systemic strategies. Journalists, policymakers, educators, and technology companies must work together to strengthen the resilience of democratic information ecosystems.

The report recommends updating newsroom protocols to account for AI-generated content, investing in advanced verification training, and developing shared industry standards for identifying and labeling synthetic media. It stresses the need for governments to create regulatory frameworks that address both malicious uses of AI and false accusations of AI involvement.

Media literacy must also evolve. Public education campaigns should help citizens understand how AI-generated content works, how to evaluate sources and how to recognize psychological tactics that influence belief formation.

The authors conclude that the future of journalism will depend not only on combating disinformation, but also on rebuilding trust in credible media. The challenge is not simply distinguishing the real from the artificial; it is creating the social resilience necessary to uphold truth in an era when the boundaries of reality can be artificially blurred.

They call for pragmatic, transparent and multidisciplinary approaches that combine AI innovation with journalistic integrity and public awareness. Without these efforts, AI-driven disinformation could accelerate declines in trust, amplify societal divisions and reshape public communication in ways that threaten democratic stability.

FIRST PUBLISHED IN: Devdiscourse