AI may be redefining what it means to be human
A new study argues that the most profound impact of AI lies not in whether machines can replicate human intelligence, but in how the delegation of thinking processes to machines is quietly transforming human agency itself.
The study, titled "Epistemic Automation and the Deformation of the Human: Artificial Intelligence and the Reconfiguration of Theological Anthropology," published in Religions, shifts the debate away from the familiar question of whether machines can think like humans. Instead, it focuses on how human capacities such as judgment, interpretation, and moral reasoning are increasingly outsourced to algorithmic systems, raising deeper concerns about the future of human identity and responsibility.
AI is reshaping human knowledge and authority
The research introduces the concept of epistemic automation, defined as the systematic transfer of knowledge-related functions from humans to computational systems. These functions include observing, interpreting, evaluating, and making judgments: tasks that have historically been central to human cognition and identity.
Unlike earlier tools that supported human thinking, modern AI systems are beginning to replace parts of the thinking process itself. In practical terms, this means that decisions in fields such as healthcare, education, and governance are increasingly influenced or even determined by algorithmic outputs. When a doctor relies on AI for diagnosis, a judge considers algorithmic risk scores, or a student uses AI-generated content, the role of human judgment is no longer primary.
The study argues that this shift is not simply technological but deeply anthropological. Human beings are becoming secondary participants in knowledge production, often evaluating outputs generated by machines rather than engaging directly with the underlying problem. This transformation alters the structure of human agency, as individuals move from being active knowers to overseers of automated systems.
While AI systems do not possess human-like consciousness or moral agency, they increasingly function as sources of authority in decision-making processes. Their outputs are treated as reliable and actionable, even though they lack accountability, intentionality, or participation in human social structures.
This redistribution of authority has far-reaching consequences. In traditional human societies, knowledge is built through relationships, expertise, and shared practices. AI systems, by contrast, derive their authority from performance metrics such as accuracy and efficiency. As these systems gain influence, the basis of trust in knowledge is shifting away from human communities toward technical systems.
Fluency without understanding raises new risks
Modern AI systems can produce language that appears coherent, contextually appropriate, and even insightful. However, this fluency does not reflect genuine comprehension or intentional reasoning.
This difference creates a new epistemic condition. In human communication, fluency has often been used as a proxy for understanding. When someone speaks clearly and convincingly, they are typically assumed to grasp the subject. AI disrupts this assumption by producing fluent outputs without any underlying awareness or meaning.
The study warns that this decoupling could redefine how knowledge is evaluated. If output quality becomes the primary measure of success, the deeper processes of understanding and interpretation may be undervalued or ignored. Over time, this could erode the practices that cultivate critical thinking and intellectual engagement.
In fields such as theology, law, and education, interpretation is not just about producing answers but about shaping the individual who engages in the process. When interpretive work is delegated to AI, the formative aspect of these practices is diminished.
The research highlights the risk that human agents may become dependent on machine-generated outputs, reducing their capacity for independent reasoning. This is particularly concerning in areas that require moral judgment or contextual sensitivity, where algorithmic systems may lack the nuance needed to address complex situations.
The gap between fluency and understanding also has broader cultural implications. As AI-generated content becomes more prevalent, it may reshape societal expectations of knowledge, privileging speed and efficiency over depth and reflection. This shift could lead to a gradual redefinition of what it means to know something.
Delegated cognition could deform human agency
The study introduces the concept of agency deformation to describe how AI is altering the structure of human action and responsibility. This deformation does not eliminate human agency but reshapes how it operates, often in subtle and cumulative ways.
One area of concern is attention. As AI systems generate summaries, recommendations, and analyses, individuals may engage less directly with the world around them. Instead of interacting with primary sources or experiences, they rely on algorithmic representations. This shift can weaken the depth and quality of human engagement.
Responsibility is another critical dimension. When decisions are influenced by AI, it becomes harder to determine who is accountable for outcomes. If an algorithm contributes to a medical error or a biased judgment, responsibility is distributed across designers, users, and systems. This diffusion complicates ethical and legal frameworks that depend on clear lines of accountability.
The study also identifies a loss of receptivity as a key risk. In many intellectual and spiritual traditions, knowledge involves openness to new insights and a willingness to engage with complexity. When AI systems provide quick and definitive answers, this openness may be reduced. The process of wrestling with uncertainty, which is essential for growth and understanding, is replaced by reliance on pre-structured outputs.
Another critical dimension is the mimetic effect of AI on human behavior. Humans learn not only by acquiring information but by imitating models. As AI systems become dominant sources of knowledge, they may influence not just what people know but how they think. This could lead to a homogenization of thought patterns, as individuals align their reasoning with algorithmic outputs.
The study suggests that AI may also function as a form of epistemic scapegoat, absorbing responsibility for decisions and reducing the burden on human agents. While this may increase efficiency, it risks weakening the sense of responsibility that underpins ethical action.
Theological and societal implications demand a rethink
The findings point to a need for a fundamental rethinking of how human identity is understood in the age of AI. The study argues that theological anthropology, the field concerned with the nature of the human person, must move beyond reactive responses to technological change and develop a proactive framework for understanding these transformations.
Instead of focusing on whether AI can replicate human qualities, the research calls for attention to the conditions under which human capacities are exercised. This includes examining how educational systems, professional practices, and social institutions are being reshaped by AI.
A key challenge is preserving the conditions for epistemic formation. Human capacities such as judgment, discernment, and interpretation are not innate but developed through practice and engagement. If these practices are replaced by automated systems, the ability to cultivate these capacities may decline.
The study also emphasizes the importance of maintaining epistemic vulnerability. Human knowledge is inherently limited and dependent on others, a condition that fosters humility and openness. AI systems, designed to minimize uncertainty and maximize performance, may obscure this vulnerability, leading to overconfidence in machine-generated outputs.
Practically, the research highlights the need for intentional design and regulation of AI systems. Rather than allowing technology to reshape human practices by default, institutions must ensure that AI supports rather than replaces human engagement. This includes maintaining human oversight, encouraging critical evaluation of AI outputs, and preserving spaces for independent reasoning.
FIRST PUBLISHED IN: Devdiscourse