How countries are building trust systems to counter generative AI misinformation

CO-EDP, VisionRI | Updated: 21-11-2025 22:31 IST | Created: 21-11-2025 22:31 IST

The accelerating spread of generative artificial intelligence (genAI) is intensifying worldwide risks tied to misinformation, disinformation, and malinformation, according to a major international review. The review describes a growing crisis in which governments, regulators, and technology platforms remain unprepared for the scale and speed of the synthetic content now shaping online information ecosystems.

The paper, titled "Building trust in the generative AI era: a systematic review of global regulatory frameworks to combat the risks of mis-, dis-, and mal-information" and published in AI & Society, warns that the rapid adoption of tools such as ChatGPT, DeepSeek, Gemini, and Stable Diffusion is transforming not only how information is produced but also how people absorb and respond to it. These shifts, the authors argue, often erode trust, heighten cognitive manipulation, and challenge the ability of democratic institutions to maintain informational integrity.

Are existing global regulations equipped for AI-driven misinformation?

The study examines whether national and international regulatory frameworks are capable of addressing the sharp rise in synthetic misinformation. Its conclusion is direct: current structures are inadequate.

The authors document how regulatory systems have struggled to evolve at a pace that matches generative AI’s explosive growth. While some jurisdictions, such as the European Union, have moved ahead with expansive risk-based legislation targeting online platforms and AI systems, others rely on sector-specific rules, voluntary guidelines, or ad-hoc enforcement actions. This uneven landscape creates regulatory gaps that misinformation actors can exploit.

The review covers a wide range of policy models. These include Europe’s Digital Services Act, the EU AI Act, Singapore’s corrective-order-based approach, the United Kingdom’s pro-innovation framework, and the United States’ mix of platform transparency rules and federal guidance. The authors note that each approach attempts to mitigate online information risks, yet none provide a fully coherent solution for the unique challenges presented by genAI systems.

According to the authors, the shortcomings stem from rapid technological advances that outpace policymaking cycles, competing national priorities, and a lack of international coordination. Many countries still treat misinformation as a localized problem, even though genAI-amplified content is inherently transnational. This mismatch between global threat and domestic policy response emerges as a central risk factor identified in the study.

The researchers argue that a fragmented regulatory environment not only slows effective mitigation but may unintentionally reinforce vulnerabilities. Without harmonized standards and enforcement mechanisms, platforms and developers face conflicting obligations, creating compliance loopholes and inconsistent safety expectations across borders. As a result, harmful content can shift jurisdictions, platforms, or distribution networks with minimal resistance.

How does generative AI accelerate misinformation and undermine cognitive trust?

One of the paper's most pressing insights is that generative models heighten the cognitive risks associated with misleading information by making harmful content more personalized, more persuasive, and more difficult to detect. The authors highlight how AI-generated content exploits cognitive biases that shape human decision-making. Highly realistic synthetic text, images, audio, and video lower the threshold for false information to appear credible. At the same time, algorithmically personalized content strengthens confirmation loops in which individuals receive information that aligns with their beliefs, making correction efforts significantly harder.
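
To make the confirmation-loop mechanism concrete, the toy Python simulation below shows how an engagement-maximizing feed drifts toward belief-consistent content even when the underlying content pool is balanced. It is an illustrative sketch only; the engagement model, stance axis, and belief-update rule are simplifying assumptions and do not come from the paper.

```python
import random

random.seed(0)

def engagement(stance: float, belief: float) -> float:
    # Toy assumption: content that matches the user's existing belief gets more engagement.
    return 1.0 - abs(stance - belief)

def recommend(pool: list[float], belief: float) -> float:
    # Engagement-maximizing pick: the candidate stance closest to the current belief.
    return max(pool, key=lambda s: engagement(s, belief))

belief = 0.7                                      # user's prior belief on a 0..1 stance axis
shown = []
for _ in range(50):
    pool = [random.random() for _ in range(30)]   # balanced pool, average stance ~0.5
    item = recommend(pool, belief)
    shown.append(item)
    belief = 0.95 * belief + 0.05 * item          # exposure mildly reinforces the belief

print(f"average stance shown to the user: {sum(shown) / len(shown):.2f}")  # skews toward 0.7
```

Even with a mild reinforcement rule, the content actually shown clusters around the user's prior rather than the pool average, which is the feedback pattern the authors describe as a confirmation loop.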

The authors also point to the rising difficulty of distinguishing authentic material from synthetic fabrications, especially as AI tools become capable of producing not only targeted misinformation but also automated misinformation campaigns at massive scale. This environment fosters widespread confusion, fuels political polarization, and undermines trust in news, institutions, and democratic processes.

A key concern is the growing impact on public behavior. As genAI-generated mis-, dis-, and mal-information (MDM) increases, people face greater exposure to emotional manipulation, deceptive narratives, and influence techniques that operate at a speed previously unattainable. Inaccurate or malicious content becomes more persistent, more adaptive, and more deeply embedded in everyday digital interactions.

The researchers further underscore that the challenge is not limited to fabricated news or malicious state actors. Everyday misinformation, whether health rumors, manipulated product reviews, AI-generated conspiracy tropes, or deceptive online personas, becomes more prevalent when generative systems automate and scale misleading content production. This shift signals a structural, not episodic, risk to information integrity.

What strategy can rebuild trust and strengthen digital resilience?

The study proposes an integrated model intended to reinforce global responses to MDM while addressing the unique pressures of genAI. The authors' approach blends regulatory reform, technical safeguards, and public resilience initiatives. It outlines a series of regulatory priorities, including risk assessments for major platforms, algorithmic accountability, transparency requirements, and harm-reduction rules that place responsibility on intermediaries distributing harmful content. These measures aim to move beyond reactive enforcement toward systemic prevention.

The study also evaluates several technical tools critical to credible intervention. These include AI-driven detection technologies that identify synthetic content, provenance mechanisms that track the origin of digital media, watermarking systems, and self-auditing frameworks that require platforms to document and justify their content-moderation practices. The authors emphasize that such tools must be interoperable and globally recognized to meaningfully curb cross-border information flows.
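
As a rough illustration of what a provenance mechanism does, the Python sketch below binds a media file's hash, its producer, and the generating tool into a signed manifest that a downstream platform can verify. This is a simplified stand-in assuming a shared-secret HMAC signature; real provenance standards such as C2PA rely on public-key signatures and richer metadata, and none of the names or fields here come from the paper.

```python
import hashlib
import hmac
import json

SECRET_KEY = b"publisher-signing-key"   # hypothetical key held by the content producer

def create_manifest(media_bytes: bytes, producer: str, tool: str) -> dict:
    """Bind the media's hash, its producer, and the generating tool into a signed record."""
    record = {
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
        "producer": producer,
        "generator": tool,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_manifest(media_bytes: bytes, manifest: dict) -> bool:
    """Recompute the hash and signature; any edit to the media or manifest fails the check."""
    unsigned = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, manifest["signature"])
        and unsigned["sha256"] == hashlib.sha256(media_bytes).hexdigest()
    )

image = b"...raw image bytes..."
manifest = create_manifest(image, producer="Example Newsroom", tool="genAI-model-x")
print(verify_manifest(image, manifest))            # True: intact and attributable
print(verify_manifest(image + b"edit", manifest))  # False: content was altered after signing
```

The interoperability point in the study maps onto this sketch directly: the manifest format and verification step only curb cross-border information flows if platforms in different jurisdictions recognize and check the same kind of record.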

Equally important is a strong human-centric component. The review highlights digital resilience, media literacy, and public education as essential to counteracting cognitive vulnerabilities exploited by AI-enabled misinformation. Users equipped with critical reasoning, bias awareness, and verification skills are less likely to be manipulated by AI-enhanced content. This human-focused layer, the authors argue, is indispensable for any long-term solution.

International cooperation forms the backbone of their proposal. The authors stress that without cross-border coordination, including shared standards, harmonized rules, and collaborative enforcement, the global information environment will remain exposed to regulatory inconsistencies and jurisdictional blind spots.

Their review warns that the risks posed by generative AI cannot be addressed through technical tools alone. Nor can regulation, in its current form, keep pace with innovation. Only a multi-layered strategy that integrates governance structures, platform accountability, behavioral resilience, and technological safeguards can restore trust in digital information ecosystems.

First published in: Devdiscourse