Generative AI is undermining the foundations of trust


CO-EDP, VisionRI | Updated: 03-02-2026 18:54 IST | Created: 03-02-2026 18:54 IST

Technologies designed to increase the fidelity, accessibility, and scale of information are instead destabilizing the conditions required to verify truth. A new study warns that while generative artificial intelligence is rapidly transforming how information is produced, circulated, and perceived, its most dangerous impact may not be misinformation itself but the slow collapse of trust in digital evidence.

The research paper, titled The Generative AI Paradox: GenAI and the Erosion of Trust, the Corrosion of Information Verification, and the Demise of Truth, is published in Future Internet. The study argues that generative AI is driving a structural shift in the information ecosystem that threatens journalism, elections, courts, financial systems, and everyday civic life.

From synthetic media to synthetic reality

The study moves beyond familiar debates about deepfakes and online misinformation to describe a broader phenomenon that the author terms “synthetic reality.” Rather than focusing on isolated fake artifacts, the research shows how generative AI enables the construction of entire information environments in which content, identity, interaction, and institutional signals are jointly manufactured.

At the most basic level, generative AI produces synthetic content, including realistic text, images, audio, and video at near-zero marginal cost. What makes this capability destabilizing, the study argues, is not realism alone but scale and variation. Adversaries can now generate thousands of plausible alternatives, flood information channels, and rapidly adapt content to specific audiences or events.

This content layer becomes far more powerful when paired with synthetic identity. Voice cloning, face reenactment, and document fabrication allow artificial systems to convincingly impersonate specific individuals or institutions. Signals that once served as reliable shortcuts for trust, such as recognizing a familiar voice or reviewing official-looking paperwork, no longer provide meaningful assurance.

Synthetic interaction adds a further layer of risk. Conversational agents can sustain dialogue, probe uncertainty, adapt emotional tone, and build long-term persuasive relationships. The study shows that belief formation is shaped not only by what people see, but by social reinforcement, feedback, and interaction. When AI systems simulate social presence at scale, deception shifts from static misinformation to dynamic persuasion.

At the institutional level, these layers converge to create systemic stress. Courts, election offices, financial institutions, and newsrooms rely on verification workflows built for a world where forging high-quality evidence was costly and rare. Generative AI collapses that assumption. The result is rising verification load, slower decision-making, and contested evidence that undermines accountability.

The author argues that synthetic reality is not merely a media problem but a systems risk. When entire contexts of belief can be fabricated, the question is no longer whether a single piece of content is fake, but whether institutions can still converge on shared facts at all.

The Generative AI paradox and the collapse of verification

As generative systems make digital artifacts more convincing, they simultaneously make them less useful as evidence. In a world where any document, image, or recording could be synthetically produced, plausibility loses its value as a signal of truth.

The study describes this as a market failure in the “truth economy.” Generative AI drives the cost of producing high-fidelity information toward zero, while pushing the cost of verification toward infinity. Individuals and institutions are forced to invest more time, expertise, and infrastructure simply to establish baseline authenticity.

This shift creates two dangerous failure modes. The first is credulity, where people accept convincing fabrications because verification is too costly or slow. The second is cynicism, where authentic evidence is dismissed as potentially fake, enabling denial, delay, and plausible disavowal. Both outcomes weaken accountability and favor actors who benefit from confusion.

The paradox is already visible in real-world incidents analyzed in the study. High-conviction impersonation fraud has led to major financial losses by exploiting trusted workflows rather than technical vulnerabilities. Election-adjacent manipulation has used cloned voices and synthetic outreach to confuse voters while evading collective correction. Non-consensual synthetic imagery has produced persistent harm that outpaces platform moderation.

Beyond these headline cases, the study also highlights the corrosion of routine verification. Fake receipts, invoices, emails, and identity documents are increasingly realistic, overwhelming human review and forcing institutions toward automated checks that still lack reliable provenance. As documentation loses default credibility, everyday transactions accrue friction and error costs.

The cumulative effect is what the author describes as an “epistemic tax” on society. More resources are required to establish what happened, who said what, and which claims are trustworthy. These costs fall unevenly, disproportionately affecting individuals and communities with limited access to verification tools or institutional protection.

Why detection alone cannot solve the problem

The study asserts that better detection tools are necessary but insufficient. While classifiers and watermarking can identify some synthetic outputs, they cannot restore trust in an environment where believable content is abundant and adversaries adapt quickly.

Detection systems are probabilistic and fragile under compression, remixing, and cross-platform distribution. More importantly, institutions often require auditable certainty rather than likelihood scores. Courts, elections, and financial systems cannot rely on confidence estimates alone when stakes are high.

The deeper issue is a provenance gap. Without reliable chains of custody, authenticated capture, and process-level safeguards, even accurate detection does little to resolve contested evidence. The absence of standardized provenance infrastructure leaves institutions vulnerable to both forgery and denial.
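
To make this concrete, the toy decision policy below is a hedged sketch (not anything proposed in the paper): even a confident detector score leaves an item contested when no authenticated provenance exists, because a probability is not a chain of custody. The enum values, threshold, and score source are illustrative assumptions.

```python
# Toy policy illustrating why a detector's likelihood score is not, by itself,
# evidence. All values here are illustrative assumptions, not a real standard.
from enum import Enum

class Provenance(Enum):
    VERIFIED = "signed capture with an intact chain of custody"
    UNKNOWN = "no authenticated provenance available"

def evidentiary_status(synthetic_score: float, provenance: Provenance) -> str:
    """synthetic_score: a detector's estimate in [0, 1] that the item is AI-generated."""
    if provenance is Provenance.VERIFIED:
        # Authenticity rests on the provenance record, not on a classifier's guess.
        return "admissible: supported by authenticated capture"
    if synthetic_score >= 0.9:
        return "flagged: likely synthetic, but the score is probabilistic, not proof"
    # The uncomfortable default: a low score without provenance proves nothing,
    # which is the gap the study describes.
    return "contested: cannot be resolved by detection alone"

print(evidentiary_status(0.12, Provenance.UNKNOWN))   # contested
print(evidentiary_status(0.97, Provenance.UNKNOWN))   # flagged
```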

The paper also points out that generative AI enables automation of social engineering. Deception no longer depends on a single artifact being convincing, but on sustained interaction that adapts to targets over time. This transforms information integrity from a content problem into a relational and institutional challenge.

As synthetic content floods information channels, correction becomes slower and less visible than initial exposure. Micro-segmentation ensures that different groups encounter different narratives, undermining shared rebuttal and amplifying polarization.

The study warns that without structural intervention, societies may rationally retreat from trusting digital evidence altogether. In such an environment, truth becomes a privilege tied to access to authenticated channels, legal resources, and institutional power.

Building epistemic security in the age of AI

Rather than proposing a single solution, the study outlines a layered mitigation approach aimed at restoring value to the truth economy. Key to this approach is the concept of epistemic security, defined as the capacity of socio-technical systems to sustain shared reality and accountable decision-making under adversarial pressure.

Provenance infrastructure is identified as a foundational element. Cryptographic signing, secure capture, and content credentials can raise confidence in authenticated media, particularly for high-stakes communications. While provenance cannot cover all content, its asymmetric value lies in making verified information cheaper to trust than unverified material.
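
As a rough illustration of that asymmetric value, the sketch below signs the hash of captured media with an Ed25519 key at the source and verifies it downstream; any post-capture edit changes the hash and breaks verification. This is a minimal toy, not the study's proposal and not a full content-credential standard such as C2PA, which also binds certificates, edit history, and metadata.

```python
# Minimal provenance sketch: sign the hash of captured media at the source,
# verify the signature downstream. Illustrative only; real content-credential
# systems bind far richer metadata and certificate chains.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey, Ed25519PublicKey

def sign_media(media: bytes, private_key: Ed25519PrivateKey) -> bytes:
    """Hash the raw media bytes and sign the digest at capture time."""
    return private_key.sign(hashlib.sha256(media).digest())

def verify_media(media: bytes, signature: bytes, public_key: Ed25519PublicKey) -> bool:
    """Re-hash the received bytes and check them against the published signature."""
    try:
        public_key.verify(signature, hashlib.sha256(media).digest())
        return True
    except InvalidSignature:
        return False

key = Ed25519PrivateKey.generate()
original = b"raw bytes of a captured photo (hypothetical)"
sig = sign_media(original, key)

print(verify_media(original, sig, key.public_key()))               # True
print(verify_media(original + b" edited", sig, key.public_key()))  # False
```

The asymmetry the study describes shows up here: for signed content, verification collapses to a single cheap check, while unsigned content inherits the full cost of manual investigation.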

Platform governance is another critical layer. The study argues for friction as a legitimate safety tool, including limiting algorithmic amplification during high-risk events, slowing virality of unverified media, and strengthening response pathways for impersonation and harassment. These measures aim to reduce harm without requiring perfect classification.
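
One hedged illustration of such friction is below; the weighting numbers, field names, and the notion of a "high-risk window" are assumptions made for the sketch, not any platform's actual policy. Unverified media is damped rather than blocked.

```python
# Sketch of "friction" in ranking: during a declared high-risk window
# (e.g. around an election), media without provenance gets a reduced
# amplification weight instead of an outright block. All values illustrative.
from dataclasses import dataclass

@dataclass
class Post:
    base_score: float        # relevance/engagement score from the ranker
    has_provenance: bool     # carries verifiable content credentials
    is_media: bool

def amplification_weight(post: Post, high_risk_window: bool) -> float:
    if not post.is_media or post.has_provenance:
        return post.base_score
    # Unverified media still circulates, but more slowly while risk is elevated.
    damping = 0.3 if high_risk_window else 0.8
    return post.base_score * damping

print(amplification_weight(Post(base_score=1.0, has_provenance=False, is_media=True),
                           high_risk_window=True))
```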

Institutional redesign is presented as unavoidable. Workflows must assume that forged identity and documentation are cheap. This includes out-of-band verification, multi-factor authorization that does not rely on perceptual cues, and process-based trust rather than artifact-based trust.
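
The sketch below illustrates process-based trust under the assumption that any inbound voice, video, or email could be synthetic; the approval path, thresholds, and function names are hypothetical, not a workflow from the paper. The key property is that no perceptual cue on the inbound channel can, by itself, release funds.

```python
# Hedged sketch: a high-value transfer is never approved on the strength of a
# convincing inbound request alone. Confirmation must come over a channel
# registered before the request existed, and large amounts need a second,
# independent authoriser. All names and thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class TransferRequest:
    amount: float
    requester_id: str
    inbound_channel: str  # e.g. "voice_call", "email" -- carries no trust weight

def confirmed_out_of_band(requester_id: str) -> bool:
    """Stub: call back on the number already on file, or require a hardware-token
    challenge -- never reply on the channel the request arrived through."""
    return False  # placeholder; wire to the real callback / token system

def second_authoriser_approves(request: TransferRequest) -> bool:
    """Stub: an independent human approver working in a separate system."""
    return False  # placeholder

def approve_transfer(request: TransferRequest, dual_control_threshold: float = 10_000.0) -> bool:
    # Note what is absent: no check of how realistic the voice or email seemed.
    if not confirmed_out_of_band(request.requester_id):
        return False
    if request.amount >= dual_control_threshold:
        return second_authoriser_approves(request)
    return True
```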

Public resilience also plays a role, though the study cautions against shifting the burden entirely onto individuals. Rather than asking users to spot fakes, epistemic hygiene should emphasize reliance on authenticated channels for critical information and awareness of manipulation tactics.

Furthermore, the paper calls for policy interventions that increase accountability without suppressing legitimate use. Disclosure requirements, rapid response obligations, and minimum authentication standards in sensitive domains are framed as ways to rebalance incentives rather than censor speech.

FIRST PUBLISHED IN: Devdiscourse