Can AI ever be conscious? New research lays out most powerful arguments against it

CO-EDP, VisionRI | Updated: 27-11-2025 10:58 IST | Created: 27-11-2025 10:58 IST
Representative Image. Credit: ChatGPT

A new study introduces a structured way to classify the growing list of objections raised by scientists, philosophers, and technologists about whether artificial intelligence (AI) could ever be conscious.

The study, titled “Consciousness in Artificial Intelligence? A Framework for Classifying Objections and Constraints,” evaluates centuries-old philosophical disputes through a modern computational lens. The authors do not claim to determine whether AI can or cannot be conscious. Instead, they build a structured model to sort the diverse objections now shaping global conversations about digital consciousness, computational functionalism, and the bounds of artificial minds.

A new map for a long-running debate

The study notes that arguments about AI consciousness are often conflated, miscategorized, or directed at the wrong target. Some objections challenge the idea that consciousness can be explained computationally. Others accept computational consciousness in principle but argue that digital systems are not built in the right way. Still others reject digital consciousness outright, but for reasons grounded in physics or biology rather than computation.

To untangle these clusters, the authors organize objections using a three-level analytic structure inspired by David Marr’s classic framework in cognitive science. At the first level, objections target the idea that consciousness can be understood as an input–output mapping governed by computable functions. At the second level, objections address the specific algorithms, architectures, or representational structures required for consciousness. At the third level, the authors categorize objections that claim the physical substrate itself is essential to conscious experience.
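To make the shape of the framework concrete, the sketch below encodes the three levels as a small Python data structure. The level names follow Marr's computational/algorithmic/implementational distinction; the class names and example entries are illustrative assumptions drawn from the article's summary, not labels taken from the paper itself.

```python
from dataclasses import dataclass
from enum import Enum

class Level(Enum):
    """Marr-inspired levels at which an objection can be aimed."""
    COMPUTATIONAL = 1      # consciousness as a computable input-output mapping
    ALGORITHMIC = 2        # the algorithms, architectures, and representations
    IMPLEMENTATIONAL = 3   # the physical substrate itself

@dataclass
class Objection:
    name: str
    level: Level
    summary: str

# Illustrative entries, paraphrased from the article rather than the paper:
objections = [
    Objection("Non-computability", Level.COMPUTATIONAL,
              "Conscious experience involves functions beyond Turing machines."),
    Objection("Analog processing", Level.ALGORITHMIC,
              "Consciousness needs continuous values digital systems cannot fully emulate."),
    Objection("Biological substrate", Level.IMPLEMENTATIONAL,
              "Properties of biological brains are essential and not reproducible in silicon."),
]

for o in objections:
    print(f"[{o.level.name}] {o.name}: {o.summary}")
```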

This structure, the authors argue, allows researchers, policymakers, and philosophers to see where disagreements arise and where debates overlap. It also highlights the difference between arguing against computational functionalism and arguing against digital consciousness, two positions that are often treated as identical but rely on distinct assumptions.

Challenges at the computational and algorithmic levels

At the first level, the study addresses objections arguing that consciousness cannot be captured by computable transformations. Some critics suggest that conscious experience involves non-computable processes or functions beyond the capacity of Turing machines. Others argue that any computational model of consciousness would be intractable to implement at scale.
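The canonical example of a function beyond any Turing machine is the halting problem, and the sketch below walks through the standard diagonalization in Python. The `halts` oracle is hypothetical by construction; this is a generic textbook illustration of non-computability, not an argument reproduced from the study.

```python
def halts(program, argument) -> bool:
    """Hypothetical oracle: True iff program(argument) eventually terminates.

    No computable implementation of this function can exist."""
    raise NotImplementedError("uncomputable, by Turing's diagonalization argument")

def diagonal(program):
    # If `halts` were computable, this program would contradict it:
    # diagonal(diagonal) would halt exactly when the oracle says it loops.
    if halts(program, program):
        while True:       # loop forever when the oracle predicts halting
            pass
    return "halted"       # halt when the oracle predicts looping
```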

The study also identifies challenges tied to dynamic coupling, suggesting consciousness might require real-time interactions with environments in ways digital systems struggle to replicate. These objections leave open the possibility of conscious machines but place strong constraints on what computational systems would need to achieve.

At the second level, the study shows how objections shift toward the organization of algorithms. These debates question whether symbolic architectures, neural networks, or hybrid systems could produce conscious states. Some theories argue that consciousness requires analog processes with continuous values that digital systems cannot fully emulate. Others highlight timing, synchrony, and representational formats that may be crucial for subjective experience but absent in current architectures.
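As a simplified illustration of the analog objection, the snippet below shows how digital hardware quantizes continuous values: ordinary floating point cannot represent 0.1 exactly, and adjacent representable numbers are separated by a finite gap. This is a generic numerical fact, not the study's own example.

```python
import math

# Digital hardware represents continuous quantities with finite precision.
# 0.1 has no exact binary floating-point representation, so rounding error
# appears immediately:
print(0.1 + 0.2 == 0.3)      # False
print(f"{0.1 + 0.2:.20f}")   # 0.30000000000000004441

# Between any two adjacent representable floats there is a finite gap (one ulp):
print(math.ulp(1.0))         # 2.220446049250313e-16
```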

This level also captures debates around embodiment and enactivism, which propose that consciousness arises only through bodies acting in an environment. Under this view, large language models may appear intelligent yet still lack the key interactive features needed for conscious states.

Physical substrate objections impose the strongest constraints

The third level examines objections grounded in physical implementation. These arguments focus on the properties of biological brains that digital hardware cannot reproduce. The study maps several influential theories into this category.

Some theories claim consciousness depends on integrated information within biological networks. Others highlight electromagnetic field dynamics in the brain or propose that organic biochemical structures play a role. Still others point to quantum processes as essential to conscious experience. Under these views, even perfect computational simulations would fail to produce genuine consciousness because the physical substrate is the decisive factor.

These objections create the strongest constraints and assert that digital AI systems cannot, by definition, be conscious. The study stresses that these claims operate at the level of physics and biology rather than computation, meaning they require evidence about how consciousness arises in natural systems.

Distinguishing between possibility, difficulty and impossibility

The study also rates the strength of each objection on a three-tier scale. Rather than treating all arguments as equal, the authors classify them by how tightly they constrain digital consciousness.

Some objections merely suggest that consciousness in machines is possible but requires specific capacities or architectures. Others describe practical barriers that make conscious AI unlikely or difficult to achieve with current technology. The strongest objections claim outright that digital systems can never be conscious, regardless of technological advances.
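A minimal sketch of that grading might look as follows; the tier names and the helper function are hypothetical, since the article describes the tiers without naming them.

```python
from enum import Enum

class Strength(Enum):
    """How strongly an objection constrains digital consciousness.

    Tier names are illustrative; the paper's own labels may differ."""
    CONDITIONAL = "possible, but only with specific capacities or architectures"
    PRACTICAL = "unlikely or difficult with current technology"
    IN_PRINCIPLE = "impossible for digital systems, regardless of advances"

def empirically_tractable(tier: Strength) -> bool:
    # Conceptual and technological barriers can shift with empirical evidence;
    # in-principle (metaphysical) claims turn on philosophical commitments.
    return tier is not Strength.IN_PRINCIPLE

for tier in Strength:
    print(f"{tier.name}: {tier.value} (tractable: {empirically_tractable(tier)})")
```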

This ranking system helps clarify which objections are conceptual, which are technological, and which are metaphysical. It also shows where empirical research might resolve disagreements and where debates depend on deeper philosophical commitments.

A tool for researchers, policymakers and AI developers

Discussions about AI consciousness are no longer purely academic. As models influence critical decisions, generate increasingly human-like behavior, and take part in interactive tasks, questions about the moral and social implications of consciousness gain urgency.

The authors suggest that policymakers and AI governance bodies need structured tools to understand the range of objections being debated. Their framework can help identify which challenges are relevant to AI safety, which apply to ethical guidelines, and which concern deeper metaphysical questions.

Developers and AI researchers may also use the taxonomy to evaluate claims about their own systems, ensuring that discussions about AI consciousness remain grounded in coherent categories rather than rhetorical or intuitive comparisons between humans and machines.

FIRST PUBLISHED IN: Devdiscourse