AI may never achieve true consciousness under current paradigms


CO-EDP, VisionRI | Updated: 16-02-2026 09:06 IST | Created: 16-02-2026 09:06 IST

The race to build machines that think like humans has accelerated, fueled by advances in large language models and agent-based systems. Governments and technology firms are investing billions in the belief that more data, more computing power and more complex architectures will ultimately lead to artificial general intelligence. A new philosophical analysis now questions whether that belief rests on a flawed foundation.

In the study "What Artificial Intelligence May Be Missing—And Why It Is Unlikely to Attain It Under Current Paradigms," published in Philosophies, the author argues that current AI systems simulate intelligence without possessing the defining features of life. The study asserts that consciousness, autonomous motivation and true understanding may not emerge from computation alone.

Simulation is not experience

The author’s key thesis rests on a philosophical divide between performance and lived experience, captured in a metaphor that frames the entire argument: the difference between a motorcycle and a horse. A motorcycle may outperform a horse in speed and efficiency, but it is not alive. It does not grow, reproduce, experience fear or generate its own motivations. Likewise, AI systems may outperform humans in specific tasks, but they do not possess consciousness or intrinsic understanding.

Modern AI systems excel at pattern recognition, prediction and text generation. Large language models can compose essays, simulate conversation and produce responses that appear thoughtful and informed. Yet, according to the author, these capabilities amount to advanced simulation rather than genuine cognition.

The distinction is rooted in long-standing debates in the philosophy of mind. The study revisits key concepts such as phenomenal consciousness, the subjective aspect of experience often described as "what it is like" to be something. AI systems can describe emotions, narrate personal stories and generate reflections on suffering or joy. However, the author argues that there is no evidence that they experience any of what they describe. Their outputs are the result of mathematical operations on tokens, not the expression of an inner life.
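To make that point concrete, the toy sketch below (not drawn from the paper, and far simpler than a real language model) treats text generation as weighted sampling over token statistics. The tiny vocabulary and the probability table are invented purely for illustration; real systems learn vastly larger tables from data, but the character of the operation is the same.

```python
import random

# Toy next-token distribution: the "model" is just a lookup table of
# probabilities (invented here for illustration) over pairs of tokens.
NEXT_TOKEN_PROBS = {
    ("I", "feel"): {"sad": 0.4, "happy": 0.35, "nothing": 0.25},
    ("feel", "sad"): {"today": 0.6, "inside": 0.4},
    ("feel", "happy"): {"today": 0.7, "now": 0.3},
}

def sample_next(prev_pair):
    """Pick the next token by weighted random choice over stored frequencies."""
    dist = NEXT_TOKEN_PROBS.get(prev_pair, {"<end>": 1.0})
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights)[0]

# Producing "I feel sad today" involves only arithmetic over token statistics;
# nothing in this process experiences sadness.
output = ["I", "feel"]
for _ in range(2):
    nxt = sample_next(tuple(output[-2:]))
    if nxt == "<end>":
        break
    output.append(nxt)
print(" ".join(output))
```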

This argument aligns with the so-called hard problem of consciousness, which asks why subjective experience exists at all. The author maintains that increasing computational complexity does not necessarily bridge this gap. A system can produce behavior that resembles understanding without possessing understanding in any meaningful sense.

The paper also revisits the Chinese Room argument, the thought experiment proposed by philosopher John Searle, which illustrates how a system can manipulate symbols according to rules and produce coherent output without grasping meaning. According to the author, contemporary AI systems operate in precisely this way. They process numerical representations derived from training data, but they do not comprehend what those representations signify.

Even confidence scores and probability estimates generated by AI models do not indicate self-awareness. These values reflect statistical properties of outputs rather than an internally accessible sense of knowledge or ignorance. A model may assign high confidence to an incorrect answer, yet it has no awareness of error. For the author, this absence of epistemic self-awareness marks a crucial boundary between simulation and experience.
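As a rough illustration of that point (again, not code from the study), the sketch below computes a softmax confidence over made-up answer scores. The procedure confidently reports a wrong answer, and the confidence figure is simply a property of the score distribution, not an internally accessible judgement that the answer might be mistaken.

```python
import math

def softmax(logits):
    """Convert raw scores into a probability distribution (numerically stable)."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for "What is the capital of Australia?"
options = ["Sydney", "Canberra", "Melbourne"]
logits = [6.0, 2.5, 1.0]   # the wrong answer happens to score highest
probs = softmax(logits)

best = max(range(len(options)), key=lambda i: probs[i])
print(f"Answer: {options[best]} (confidence {probs[best]:.2f})")
# -> "Sydney" with roughly 0.96 confidence: a statistic about the outputs,
#    not awareness of error.
```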

Abduction, tacit knowledge and the limits of computation

The author challenges the assumption that AI systems replicate the full spectrum of human reasoning. Central to this critique is the concept of abduction, introduced by philosopher Charles Sanders Peirce as the creative leap that generates new hypotheses from incomplete data.

Deduction applies rules. Induction generalizes from patterns. Abduction, by contrast, invents possibilities that were not explicitly given. The author argues that current AI systems do not perform genuine abduction. They extrapolate from training data and optimize for statistical likelihood, but they do not generate truly novel hypotheses grounded in awareness of ignorance.

Human reasoning, the study notes, is also shaped by tacit knowledge, a concept developed by Michael Polanyi. Tacit knowledge refers to embodied, context-sensitive understanding that cannot be fully articulated in explicit rules. People know more than they can tell, drawing on lived experience and pre-reflective skills when navigating the world.

AI systems, even when opaque in their internal processes, do not possess tacit knowledge in this sense. The opacity of neural networks stems from architectural complexity and high-dimensional parameter spaces, not from embodied engagement with the world. For the author, this distinction underscores the gap between computational processes and lived cognition.

The study engages with contemporary debates about AI hallucinations and reliability, suggesting that the limitations of language models reveal deeper constraints. AI systems model patterns in data but do not explain them in a way grounded in understanding. They lack goals in the intrinsic sense and instead optimize predefined loss functions. What appears as intention is the product of externally assigned objectives.
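A minimal training-loop sketch, using an invented toy dataset, illustrates what "externally assigned objectives" means in practice: the loss function is written by the programmer, and the optimization step merely follows it. Nothing in the loop chooses or revises the goal.

```python
# Minimal gradient-descent sketch: the "goal" is the loss function supplied
# by the programmer, not anything the system selects for itself.
data = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9)]   # toy (x, y) pairs, roughly y = 2x

w = 0.0                 # single learnable parameter
learning_rate = 0.01

def loss(w):
    """Externally assigned objective: mean squared error on the toy data."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def grad(w):
    """Analytic gradient of the loss with respect to w."""
    return sum(2 * (w * x - y) * x for x, y in data) / len(data)

for _ in range(200):
    w -= learning_rate * grad(w)   # the update blindly follows the given objective

print(f"learned w = {w:.3f}, loss = {loss(w):.4f}")
```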

The author challenges the prevailing reductionist view that intelligence will emerge automatically from greater computational scale. Data centers filled with silicon-based processors may create the illusion of thought, but the underlying operations remain mechanical. In principle, he argues, the same functional outputs could be achieved by an unimaginably large system of gears and levers performing equivalent computations. The question then becomes whether functional equivalence is sufficient for consciousness.

The study suggests that intelligence and consciousness may depend not only on complexity but on specific forms of organization that are characteristic of living systems. If so, simply scaling up digital architectures will not produce minds.

Autopoiesis and the organizational divide

The author discusses the concept of autopoiesis, introduced by biologists Humberto Maturana and Francisco Varela. Autopoietic systems are self-producing and self-maintaining. They generate and regulate their own internal organization through processes that arise from within the system itself.

Living organisms reproduce, metabolize, adapt and regulate themselves autonomously. They are causally closed in the sense that their regulatory structures are generated internally rather than imposed from outside. AI systems, by contrast, are externally designed, trained and maintained. They depend on human engineers, data pipelines and energy infrastructure. They do not arise from themselves.

The author characterizes current AI as heteropoietic, meaning created and sustained by external forces. This distinction, he argues, is not merely technical but ontological. It concerns the very nature of what a system is.

Living systems possess intrinsic motivation and goals that emerge from biological needs and survival dynamics. AI systems operate with objectives defined by programmers or by optimization criteria embedded in their training processes. Even when AI agents appear autonomous, their goals remain externally assigned.

The study acknowledges that some researchers argue there are no obvious technical barriers to implementing features associated with consciousness. However, the author counters that living systems exhibit open-ended creativity that may not be reducible to formal models. Biological organisms generate novel functions and structures beyond any pre-specified state space. This open-endedness resists algorithmic prediction.

The implications extend to debates over AGI. If intelligence in biological terms is inseparable from autopoiesis and subjective experience, then current approaches focused on transformer models, memory modules and agentic frameworks may be missing the core principle required for genuine intelligence.

The author does not claim that artificial consciousness is impossible in principle. He leaves open the possibility that future science could uncover organizational principles not yet understood. Consciousness might depend on specific material configurations or non-computational processes that current digital architectures cannot replicate. But under present paradigms grounded entirely in computation, he argues, true general intelligence remains out of reach.

The study also addresses ethical risks. If society confuses sophisticated simulation with real experience, it may begin to attribute moral status, rights or authority to systems that lack consciousness. At the same time, such confusion could erode appreciation for the complexity and fragility of living beings.

The paper calls for a deeper inquiry into the foundations of intelligence. Rather than asking only how to build smarter machines, researchers may need to ask what intelligence and consciousness fundamentally are. Without clarity on those questions, the path to AGI may be guided by assumptions that overlook the essence of life.

In an era defined by rapid AI deployment, the author’s analysis offers a counterweight to technological optimism. The study does not deny the practical value of AI. Like a motorcycle, a machine can be powerful and useful without being alive. Calculators already outperform humans in numerical precision, and AI systems can solve complex tasks at scale.

But usefulness, the study insists, should not be mistaken for equivalence with living minds. Until the principles underlying consciousness and self-organization are understood, the pursuit of artificial general intelligence under current computational paradigms may remain fundamentally constrained.

First published in: Devdiscourse