AI systems don’t understand truth, and that’s becoming a global risk

A new study argues that modern AI technologies, particularly large language models (LLMs), are not merely tools for processing information but are actively reshaping the concept of truth itself, reducing it to patterns of probability and linguistic coherence rather than objective or experiential reality.

In the study titled "AI and the Reduction of Truth: A Eucharistic Alternative," published in Religions, author Christopher M. Reilly examines how AI systems are redefining epistemology by prioritizing statistical prediction over meaning, context, and lived experience. The paper presents a critical philosophical and theological analysis, proposing that the rise of AI-driven knowledge systems risks flattening truth into a purely computational construct while displacing deeper human and relational dimensions of understanding.

Algorithmic knowledge and the erosion of meaning

AI systems operate on a fundamentally different understanding of truth than traditional human-centered frameworks. Large language models generate outputs based on probabilistic relationships between words, drawing from vast datasets to produce responses that appear coherent and contextually relevant. However, this process does not involve comprehension, intentionality, or an awareness of truth in any meaningful sense.
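The mechanism at issue can be illustrated with a toy bigram model, a drastically simplified stand-in for the statistical machinery of an LLM. The corpus and word choices below are invented for illustration; the point is only that "generation" here is frequency counting, with no notion of whether a continuation is true:

```python
from collections import Counter, defaultdict

# Toy corpus; a real LLM is trained on billions of documents.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count bigram frequencies: how often each word follows another.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word_distribution(word):
    """Estimate P(next word | current word) from raw co-occurrence counts."""
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# "the" is followed by "cat" twice, "mat" once, and "fish" once,
# so "cat" is simply the most statistically likely continuation.
print(next_word_distribution("the"))
# {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
```

The model "prefers" *cat* after *the* only because that pairing is most frequent in its data, which is the reductive sense of truth-as-probability the study critiques.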

The research highlights that AI-generated content is often perceived as authoritative due to its fluency and structural consistency. This creates what the study identifies as a "truth effect," where the appearance of coherence is mistaken for factual accuracy or epistemic reliability. In practice, this can lead to the widespread circulation of information that is plausible but not necessarily true, reinforcing a shift toward surface-level validation rather than critical evaluation.

Traditional models of truth, whether grounded in empirical verification, philosophical reasoning, or experiential understanding, rely on processes of interpretation, context, and accountability. By contrast, AI systems reduce truth to statistical likelihood, privileging what is most probable within a dataset rather than what is most accurate or meaningful.

The study argues that this transformation is not neutral. As AI tools become intermediaries between users and information, they shape how knowledge is constructed and validated. Over time, this may erode the distinction between truth and representation, replacing it with a model in which information is judged primarily by its coherence and utility.

Displacement of human authority and the rise of synthetic epistemology

The research further explores how AI is altering the balance of authority in knowledge production. Historically, expertise has been rooted in human judgment, institutional credibility, and disciplinary rigor. The rise of AI introduces a new form of epistemic authority, one that is algorithmic, opaque, and largely detached from human accountability.

The study identifies this shift as the emergence of a "synthetic epistemology," where knowledge is generated through machine processes rather than human inquiry. In this framework, AI systems are not simply tools used by experts but actors that influence decision-making, interpretation, and understanding.

This transition raises questions about trust and legitimacy. While AI systems can process vast amounts of information and generate rapid responses, they lack the capacity for ethical reasoning, contextual awareness, and responsibility. Yet their outputs are increasingly integrated into workflows that require precisely these qualities.

The research points to a growing tension between efficiency and authenticity. AI systems excel at producing scalable, consistent outputs, making them attractive for applications such as content generation, data analysis, and decision support. However, this efficiency comes at the cost of depth, nuance, and relational understanding, elements that are central to human knowledge.

Dependency is also a matter of concern. As users rely more heavily on AI-generated information, they may become less engaged in the processes of critical thinking and verification. This could lead to a gradual weakening of intellectual autonomy, with individuals deferring to algorithmic outputs rather than actively interrogating them.

In this context, the paper suggests that the challenge is not simply technological but cultural. The widespread adoption of AI reflects broader shifts in how society values speed, convenience, and scalability over reflection, interpretation, and meaning.

A theological alternative: Reclaiming truth via relational understanding

In response to these challenges, the study proposes a theological framework as an alternative way of understanding truth in the age of AI. Drawing on Eucharistic theology, the research presents a model of truth that emphasizes relational presence, embodiment, and participation rather than abstraction and representation.

This perspective stands in contrast to the computational model of AI, which treats knowledge as data to be processed and optimized. The theological approach, by comparison, views truth as something that is encountered, experienced, and lived within a community. It is not merely a property of statements but a dynamic relationship between individuals, contexts, and shared realities.

The study argues that this framework offers a corrective to the reductionism of AI-driven epistemology. By re-centering truth in relational and experiential dimensions, it challenges the assumption that knowledge can be fully captured through data and algorithms. Instead, it highlights the importance of context, interpretation, and human engagement in the pursuit of understanding.

This does not imply a rejection of technology but rather a call for balance. The research suggests that AI systems can play a valuable role in supporting human knowledge, provided they are integrated within frameworks that preserve meaning, accountability, and ethical reflection.

The theological lens also stresses the value of embodiment in knowledge. Unlike AI systems, which operate in abstract digital spaces, human understanding is rooted in physical experience and social interaction. This embodied dimension is essential for grasping complex realities, particularly in areas such as ethics, culture, and identity.

By emphasizing these aspects, the study calls for a re-evaluation of how AI is deployed across sectors. It argues that technological innovation must be accompanied by philosophical and ethical reflection, ensuring that advancements in capability do not come at the expense of deeper human values.

Reframing the future of AI and truth

The rise of artificial intelligence represents not just a technological shift but a transformation in how truth is conceptualized and experienced. This transformation carries both opportunities and risks. On one hand, AI has the potential to enhance access to knowledge, streamline processes, and support decision-making across a wide range of domains. On the other hand, its reliance on probabilistic models and its lack of contextual awareness pose challenges for maintaining accuracy, integrity, and meaning.

Addressing these challenges requires a multidisciplinary approach. Technical solutions, such as improved model design and transparency mechanisms, must be complemented by broader efforts to strengthen critical thinking, ethical governance, and human-centered frameworks of knowledge.

  • FIRST PUBLISHED IN:
  • Devdiscourse