Generative AI may be driving a global breakdown in shared reality


CO-EDP, VisionRI | Updated: 09-01-2026 19:38 IST | Created: 09-01-2026 19:38 IST

A new study warns that the deeper risk of generative artificial intelligence (genAI) lies beyond fake images or cloned voices. The concern is no longer limited to misinformation as isolated incidents but extends to a broader breakdown in how societies decide what is real.

That warning comes from a new research paper titled “The Generative AI Paradox: GenAI and the Erosion of Trust, the Corrosion of Information Verification, and the Demise of Truth,” published on arXiv. The study argues that generative AI is enabling what it defines as “synthetic reality,” a layered system where content, identity, interaction, and institutions can all be partially manufactured, with serious consequences for democracy, governance, and everyday life.

From fake content to synthetic reality

Public debate around generative AI has largely focused on deepfakes, scams, and fabricated media. According to the study, that framing underestimates the scale of the challenge. Generative AI does not merely produce individual fake artifacts. It enables the construction of entire information environments that are coherent, interactive, and often personalized, making them difficult to detect or challenge from within.

The research introduces the concept of synthetic reality as a progression beyond synthetic media. In this environment, AI-generated text, images, audio, and video are combined with synthetic identities such as cloned voices, fake documents, or fabricated online personas. These identities are then embedded into interactive systems like chatbots, automated messaging, or simulated social interactions. Over time, these layers reinforce each other, creating narratives that feel socially validated and emotionally persuasive.

This shift matters because belief formation is not a passive act. People rely on context, repetition, social cues, and institutional signals when deciding what to trust. When AI systems can generate not just content but also witnesses, conversations, and documentation, the traditional signals of authenticity lose their value. The study argues that this marks a fundamental change in the balance between trust and verification.

Earlier forms of deception required skill, time, and resources. Generative AI collapses those costs. Convincing artifacts can now be produced quickly, cheaply, and in high volume. This allows malicious actors to test, refine, and deploy deceptive content at scale, often faster than institutions can respond. The result is not simply more falsehoods, but a growing pressure on the systems that societies rely on to establish shared facts.

Why generative AI changes the economics of deception

The paper identifies several qualitative shifts that explain why generative AI represents a structural risk rather than an incremental one. First is cost collapse. Tools that once required professional expertise in video editing, design, or social engineering can now be operated with minimal skill. This expands the pool of potential attackers and shortens the time between intent and execution.

Second is scale and throughput. Generative AI allows thousands of content variations to be produced and tested rapidly. This enables flooding strategies that overwhelm human attention, moderation systems, and journalistic verification. Instead of detecting a few suspicious items, platforms and institutions must contend with streams of plausible but false material.

Customization is another key factor. AI-generated deception can be tailored to specific organizations, communities, or individuals. Messages can mimic internal workflows, cultural norms, or personal relationships, reducing the cues that would normally trigger skepticism. This makes scams and manipulation more effective, particularly in high-stakes contexts like financial authorization or political communication.

The study also highlights the rise of hyper-targeted persuasion. Rather than broadcasting a single narrative, AI systems can deliver different messages to small segments, each optimized for specific fears or beliefs. This undermines collective correction because different groups may never encounter the same claims or rebuttals. Over time, this segmentation can deepen polarization and make public consensus harder to achieve.

Interactive persuasion represents another escalation. Automated conversational agents can adapt in real time, probe uncertainty, and build rapport over extended interactions. This transforms social engineering from a static message into an ongoing relationship, increasing the likelihood of compliance or belief change.

Finally, the research points to detection limits and provenance gaps. While tools like watermarking and AI detection exist, they are fragile in open ecosystems where content can be modified, recompressed, or remixed across platforms. More importantly, institutions often need more than probabilistic judgments. Courts, newsrooms, and financial systems depend on clear chains of custody and auditable evidence, which are increasingly difficult to establish when high-quality fabrications are cheap and abundant.

Together, these shifts erode trust and create what the study describes as plausible deniability. As synthetic content becomes common, authentic evidence can be dismissed as fake. This dynamic benefits actors who wish to delay accountability or exploit uncertainty, increasing the overall cost of establishing truth.

Institutional strain and the rising cost of truth

The consequences of synthetic reality extend beyond individual harm to systemic risk. The paper documents how recent incidents between 2023 and 2025 already reflect this shift. Highly convincing impersonation scams have exploited video and voice cloning to bypass internal controls in organizations. Election-related messaging has used AI-generated voices and targeted outreach to confuse or suppress voters. Non-consensual synthetic imagery has amplified harassment and reputational damage. Fake receipts, invoices, and documents have weakened routine verification processes. Even AI model supply chains have been compromised, embedding risks upstream in the tools themselves.

Across these cases, the pattern is consistent. A convincing artifact is produced at low cost, inserted into a workflow where trust is assumed, scaled through automation or repetition, and corrected only after damage has occurred. Institutions absorb the fallout in the form of financial loss, reputational harm, higher compliance costs, and public skepticism.

The study argues that this creates an “epistemic tax” on society. More time, money, and effort are required to verify claims, authenticate evidence, and resolve disputes. Newsrooms face tighter deadlines and higher verification burdens. Courts and regulators must contend with contested evidence. Businesses add layers of checks that slow transactions. Ordinary people are left unsure which sources to trust.

At its most extreme, this dynamic risks normalizing cynicism. If digital evidence is assumed to be unreliable by default, accountability weakens. The paper frames this as the Generative AI Paradox: the more realistic synthetic media becomes, the more rational it may seem to doubt all digital information. Truth becomes harder to establish, and verification becomes a privilege rather than a norm.

To address this, the research rejects the idea of a single technical fix. Detection tools alone cannot keep pace with adversarial adaptation. Watermarking cannot cover all content. Takedowns often arrive after harm has spread. Instead, the paper proposes a layered mitigation strategy aligned with the layered nature of synthetic reality.

Provenance infrastructure can help establish chains of custody for high-stakes content, especially when adopted end to end by institutions. Platform governance can reduce harm by adding friction to virality, limiting amplification during sensitive periods, and responding quickly to impersonation and harassment. Institutions must redesign workflows around the assumption that forgery is cheap, shifting from artifact-based trust to process-based trust. Public resilience efforts should focus on calibrated skepticism and reliance on authenticated channels rather than expecting individuals to spot every fake. Policy and accountability measures can raise the cost of abuse without stifling legitimate uses.
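As a rough illustration of what process-based trust built on provenance could look like in practice, the minimal sketch below is not drawn from the paper; the key, field names, and HMAC-based signing scheme are assumptions chosen for simplicity (real provenance systems typically rely on public-key signatures and richer metadata). It shows an institution binding a content hash to its claimed origin at publication time and verifying that binding later, so that trust rests on an auditable record rather than on whether the artifact merely looks authentic.

    # Illustrative sketch only: a minimal signed provenance manifest.
    import hashlib
    import hmac
    import json

    SECRET_KEY = b"institution-signing-key"  # hypothetical key; real systems use asymmetric keys

    def make_manifest(content: bytes, source: str, created_at: str) -> dict:
        """Bind the content hash to its claimed origin in a signed manifest."""
        record = {
            "sha256": hashlib.sha256(content).hexdigest(),
            "source": source,
            "created_at": created_at,
        }
        payload = json.dumps(record, sort_keys=True).encode()
        record["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
        return record

    def verify_manifest(content: bytes, manifest: dict) -> bool:
        """Accept content only if both the signature and the content hash check out."""
        claimed = {k: v for k, v in manifest.items() if k != "signature"}
        payload = json.dumps(claimed, sort_keys=True).encode()
        expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
        return (
            hmac.compare_digest(expected, manifest.get("signature", ""))
            and claimed["sha256"] == hashlib.sha256(content).hexdigest()
        )

    if __name__ == "__main__":
        original = b"Official statement text"
        manifest = make_manifest(original, source="newsroom.example", created_at="2026-01-09")
        print(verify_manifest(original, manifest))               # True: provenance record intact
        print(verify_manifest(b"Tampered statement", manifest))  # False: hash no longer matches

In such a workflow, the question shifts from “does this look real?” to “does this carry a verifiable record of where it came from?”, which is the kind of process-based trust the study argues institutions will need.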

The study also outlines a research agenda centered on measuring epistemic security. Rather than asking whether a single piece of content is real, the focus should be on whether systems can sustain shared understanding and accountable decision-making under pressure. Metrics such as verification load, correction latency, and attribution stability are proposed as ways to assess resilience.
