Truth crisis in AI era is human, not technological
Artificial intelligence (AI) has transformed the scale and speed of disinformation, pushing societies into a new phase of informational instability. Deepfakes, synthetic media, and algorithmically amplified falsehoods now circulate with unprecedented ease, challenging long-standing assumptions about truth, credibility, and public trust. While governments and platforms have responded with regulations, fact-checking tools, and digital literacy campaigns, a growing body of research suggests these measures are failing to address the root of the problem.
The study, titled "Know Thyself to Know the Truth: Fighting Disinformation in the Age of Artificial Intelligence Through Foundational, Psychological and Emotional Literacy" and published in European View, contends that the crisis of disinformation is not primarily technological but epistemological, rooted in how people understand language, process emotions, and interpret reality in an AI-mediated information environment.
The study states that without strengthening foundational literacy, psychological awareness, and emotional regulation, even the most advanced regulatory frameworks will remain incomplete.
From propaganda to deepfakes, disinformation evolves faster than defenses
Distorted information is not a modern invention. From ancient political smear campaigns to twentieth-century state propaganda, societies have always faced deliberate attempts to mislead. What distinguishes the current moment is the democratization of manipulation through artificial intelligence.
Generative AI tools have removed traditional barriers to producing convincing false content. What once required institutional resources can now be achieved by individuals with basic technical access. This shift has blurred the boundary between producers and consumers of information, accelerating the spread of misinformation, disinformation, and malinformation across digital platforms.
The research carefully distinguishes these categories. Misinformation refers to false information shared without harmful intent. Disinformation involves deliberate deception. Malinformation consists of true information weaponized to cause harm. In practice, these forms often overlap, creating complex information disorders that are difficult to identify and contain.
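The distinction can be read as positions along two axes: whether the content is true, and whether the person spreading it intends harm. The short sketch below makes that two-axis reading concrete; the class and function names are hypothetical, written only for illustration, and are not drawn from the study itself.

```python
from dataclasses import dataclass

@dataclass
class InformationItem:
    """A piece of shared content, described along the study's two axes."""
    is_true: bool        # veracity of the content
    intends_harm: bool   # whether the sharer means to deceive or damage

def classify(item: InformationItem) -> str:
    """Map an item onto the three categories of information disorder."""
    if not item.is_true and not item.intends_harm:
        return "misinformation"    # false, but shared in good faith
    if not item.is_true and item.intends_harm:
        return "disinformation"    # false and deliberately deceptive
    if item.is_true and item.intends_harm:
        return "malinformation"    # true, but weaponized to cause harm
    return "ordinary information"  # true and benign

# Example: a doctored quote circulated to discredit a candidate
print(classify(InformationItem(is_true=False, intends_harm=True)))  # disinformation
```

In practice, as the study notes, real items rarely sit cleanly in one cell of this grid, which is exactly what makes hybrid information disorders hard to identify and contain.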
AI has intensified these dynamics by enabling deepfakes that replicate voices, faces, and gestures with high precision. These synthetic artifacts undermine traditional markers of authenticity, making visual and auditory evidence less reliable as indicators of truth. As a result, citizens face a paradoxical environment in which everything can be manipulated and nothing feels fully trustworthy.
The study identifies three phases in the evolution of information disorder. The first was dominated by centralized propaganda. The second emerged with the rise of social media and algorithmic amplification. The third, unfolding now, is characterized by AI-driven content generation that scales manipulation while fragmenting accountability. In this phase, trust erodes not only in media but in institutions, expertise, and democratic processes themselves.
Polarization and skepticism become twin threats to social cohesion
The study analyzes how disinformation reshapes public behavior. Exposure to manipulated content does not produce a single uniform response. Instead, it drives two opposing but equally corrosive reactions: polarization and extreme skepticism.
Polarization emerges when individuals retreat into echo chambers and filter bubbles, environments that reinforce existing beliefs while excluding dissenting views. Algorithmic curation amplifies emotionally charged content, rewarding outrage and certainty over nuance. Over time, this process narrows perspectives and hardens group identities, making dialogue across differences increasingly difficult.
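The amplification mechanism can be sketched in miniature. The toy ranking rule below is an assumption made purely for illustration; it is not any platform's actual algorithm, and the field names and weights are invented. It shows how a scorer that boosts engagement by emotional intensity will rank a provocative post above a more measured one, even when their raw engagement is similar.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    outrage_score: float  # hypothetical 0-1 estimate of emotional charge

def feed_rank(post: Post) -> float:
    """Toy ranking: engagement counts, multiplied by emotional intensity.

    Because emotionally charged posts already attract more engagement,
    the outrage multiplier compounds their advantage over nuanced content.
    """
    engagement = post.likes + 3 * post.shares  # shares spread content further
    return engagement * (1 + post.outrage_score)

posts = [
    Post("Measured policy analysis", likes=120, shares=10, outrage_score=0.1),
    Post("Outraged hot take", likes=100, shares=15, outrage_score=0.9),
]
for p in sorted(posts, key=feed_rank, reverse=True):
    print(round(feed_rank(p), 1), p.text)
# 275.5 Outraged hot take
# 165.0 Measured policy analysis
```

Even in this deliberately simple model, the post with fewer likes wins the feed because the scorer rewards emotional charge, which is the dynamic the study describes.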
At the same time, widespread awareness of manipulation fuels skepticism. As people recognize the prevalence of fake news and deepfakes, some respond by questioning all sources of information indiscriminately. This generalized distrust does not strengthen critical thinking. Instead, it paralyzes judgment, discourages civic participation, and weakens democratic legitimacy.
Skepticism, as the study notes, is often underestimated as a societal risk. While polarization attracts more attention, unchecked cynicism can be just as damaging. When citizens no longer believe that truth is attainable, institutions lose authority, public cooperation declines, and democratic decision-making becomes fragile.
Underlying both reactions are psychological mechanisms that disinformation exploits. Cognitive biases such as confirmation bias, availability bias, and anchoring shape how individuals interpret information. Emotional triggers, social identity, and the desire for group belonging further influence belief formation. The study highlights that younger generations, who spend more time in algorithmically curated environments, are particularly vulnerable to these effects.
The research states that these vulnerabilities are not failures of intelligence but of awareness. Most individuals are unaware of how their emotions and cognitive shortcuts shape their perception of truth. Disinformation succeeds not because people lack facts, but because they lack tools to recognize how information interacts with their inner responses.
Education, not just regulation, holds the key to long-term resilience
While acknowledging the importance of regulatory efforts such as the EU AI Act and the Digital Services Act, the study finds that regulation alone cannot resolve the disinformation crisis. Laws can impose transparency requirements and platform obligations, but they cannot govern how individuals interpret and internalize information.
The research also critically assesses existing digital literacy initiatives. While these programs aim to teach users how to spot fake news, some interventions produce unintended consequences. In certain cases, increased exposure to examples of misinformation leads to heightened cynicism rather than improved discernment, reducing trust in legitimate information alongside false content.
To address these shortcomings, the study proposes a long-term educational framework built on three interconnected forms of literacy.
Foundational literacy focuses on language itself. By strengthening skills in etymology, linguistics, and translation, individuals gain a deeper understanding of how meaning is constructed and conveyed. This awareness reduces ambiguity, sharpens interpretation, and limits the space in which manipulation thrives.
Psychological literacy equips individuals to recognize cognitive biases and social influences that shape belief formation. Understanding how confirmation bias, emotional reasoning, and social pressure operate enables people to pause, reflect, and evaluate information more deliberately.
Emotional literacy addresses the affective dimension of disinformation. Manipulated content often targets fear, anger, and identity. By learning to recognize and regulate emotional responses, individuals become less susceptible to emotionally charged falsehoods and more capable of maintaining balanced judgment.
Together, these literacies form a holistic defense against AI-driven manipulation. Rather than reacting to each new technological threat, the framework aims to build enduring resilience that adapts across generations and technological change.
The study also draws attention to the role of influencers and content creators in shaping the information ecosystem. Given their impact on public opinion, especially among younger audiences, the research calls for greater responsibility and ethical awareness among those who produce and amplify viral content.