What AI still lacks: the life process behind real cognition
What counts as cognition, and what makes an AI system intelligent? A new philosophical study argues that this old debate cannot be settled by looking only at human reasoning, symbolic thought or machine performance. Instead, it proposes a life-centered framework in which cognition is rooted in the organization of living systems, while intelligence is measured by how competently those systems solve problems under uncertainty.
The study, titled "Cognition and Intelligence in Natural and Artificial Systems" and published in Philosophies, examines competing traditions in cognitive science, philosophy of mind, biology and artificial intelligence, arguing that current AI systems can show engineered or derivative intelligence but should not yet be considered cognitive in the biological sense.
Cognition is not only a human mental process
The terms cognition and intelligence are used widely across science, medicine, education and technology, but they are often defined inconsistently. In some traditions, cognition and intelligence are nearly interchangeable. In others, intelligence is treated as a measurable capacity for reasoning or problem-solving, while cognition is limited to mental operations associated with human thought.
The author argues that this confusion is partly caused by a long-standing human-centered bias. Mainstream psychology and philosophy of mind have often identified cognition with mental processes such as perception, memory, reasoning, language, learning and decision-making. These processes are usually associated with human minds and, in some cases, with other animals that have nervous systems. Intelligence, in this view, is generally treated as a subset of cognition, measured through reasoning, learning, adaptation and problem-solving.
The study compares this with a life-centered perspective, which defines cognition as a property of living systems rather than as a special function of human minds. In this approach, cognition does not begin with language, consciousness or brains. It begins with life itself. Living systems sense, regulate, respond, coordinate and maintain themselves in relation to their environments. These processes are cognitive because they allow organisms to make sense of conditions that matter for survival, growth and reproduction.
This does not mean a bacterium thinks like a human or that a plant reasons like a person. The study is careful to treat cognition as a spectrum. Basal cognition appears in simple biological systems through sensing, memory-like regulation and adaptive response. More complex cognition appears in multicellular organisms, nervous systems, animal behavior, human language, culture and symbolic reasoning.
The key shift is that cognition is no longer restricted to conscious thought. It becomes an ongoing process of organism-environment interaction. Under this framework, a single cell can be cognitive in a minimal biological sense because it detects signals, regulates its internal state and responds to conditions that affect its continued existence. Humans represent a highly complex form of cognition, but not the only form.
Classical approaches often equated cognition with conscious thought or abstract symbol manipulation. Later computational models described the mind as an information-processing system. More recent embodied and extended cognition theories pushed back against purely mentalist views by arguing that human cognition depends on the dynamic interaction of brain, body and environment.
The author argues that life-centered theories go further. They do not merely say the human mind is embodied. They say cognition is grounded in living organization itself. This shifts cognitive science away from asking only what brains do and toward asking how living systems at different levels sense, regulate and solve problems.
The paper draws on biological and systems traditions that treat living systems as self-producing, self-maintaining and environmentally coupled. Within this framing, cognition is the process through which living systems sustain themselves and coordinate with their surroundings. Intelligence is then not a separate mysterious quality. It is the degree of competence with which cognitive systems solve problems under novelty, disruption and uncertainty.
This distinction is central to the article. Cognition is the ongoing life process. Intelligence is the effectiveness of that process when a system must act, adapt or solve a problem. In this view, intelligence appears in degrees, not as an all-or-nothing trait. A bacterium, a slime mold, a plant, an animal, a human and an AI system may all show different forms of problem-solving capacity, but their cognitive status is not the same.
Artificial systems can be intelligent without being biologically cognitive
The author argues that current AI systems can display intelligence in an engineered or derivative sense because they can solve tasks, classify patterns, generate outputs and optimize performance. But this does not make them cognitive in the same way living systems are cognitive.
The distinction turns on autonomy, embodiment, self-maintenance and goal formation. Living systems maintain their own organization. They regulate themselves in relation to their environments. Their goals are not merely assigned from outside but arise from their biological need to persist, survive, grow and reproduce. Current AI systems, by contrast, generally operate on human-produced data, human-defined objectives and external infrastructures that supply their goals and functions.
That does not make AI unimportant or unintelligent. The study recognizes that AI can solve synthetic or derivative problems with high competence. It can outperform humans on some narrow tasks and increasingly operates in complex environments. But its intelligence is not grounded in autopoietic life processes. It does not have biological self-maintenance or organismic sense-making.
This claim challenges a common public assumption that high performance equals cognition. A language model may generate fluent answers, a recommender system may predict consumer preferences and a robot may navigate a room. These are forms of intelligent function, but under the study's framework, they do not automatically constitute cognition in the life-centered sense.
The study also separates physical adaptation from cognition. Many non-living physical systems reorganize when conditions change. Fluids flow around obstacles, crystals form structures and mechanical systems respond to force. But such adaptation is not cognition because it lacks internally organized regulation aimed at maintaining a self-producing system. Cognition arises when a living system senses and regulates its interaction with the world in relation to its own continued organization.
This helps clarify why the paper does not simply call everything intelligent. It preserves a hierarchy. Physical systems may adapt. Living systems are cognitive because they regulate themselves in meaningful relation to their environments. Intelligent systems, natural or artificial, solve problems in goal-directed ways under changing conditions. Human intelligence is one highly developed case, but not the only one.
The study says artificial systems may require new categories in the future. Embodied AI, robotics, cyber-physical systems and synthetic biological technologies may gradually blur some boundaries, especially if they gain more autonomous self-regulation, persistent goals and environmental coupling. But current AI remains largely dependent on externally supplied goals and infrastructures.
This distinction is especially important as AI is increasingly described with human-like language. Systems are said to know, understand, reason, perceive, decide and learn. The author's framework urges more careful wording. AI can implement cognition-like functions, but that is not the same as being cognitive in the biological sense. It can perform intelligently without having the life-based organization from which natural cognition emerges.
The study also argues that older computational views of cognition need updating. Classical computation often treated cognition as symbol manipulation, associated with formal logic, rules and abstract processing. But living systems process information in embodied, physical and dynamic ways. The paper highlights newer computational approaches that include morphological computation, self-modifying systems, interactive computation and info-computational frameworks.
In this broader view, computation is not limited to digital symbol processing. Biological systems compute through their structure, form, dynamics and interactions. A body, a cell, a tissue or an organism can process information through physical organization. This helps bridge computational and embodied accounts of cognition rather than treating them as opposites.
The study uses the term info-computationalism to connect information, computation and cognition across natural systems. Information involves structured differences that matter to a system. Computation involves transformation of those differences. Cognition involves the living agent for which those differences become meaningful and functional. This framework is not a single metric of intelligence but a way to compare systems across levels of organization.
Framework could reshape debates in AI, biology and cognitive science
The proposed unifying framework organizes cognition and intelligence across natural and artificial systems. It does not reject human-centered theories outright. Instead, it places them inside a wider life-centered picture. Human cognition remains important because humans have language, culture, symbolic reasoning and reflective thought. But those capacities are built on deeper biological processes shared, in simpler forms, across living systems.
In cognitive science, it challenges the assumption that cognition should be studied mainly through human minds or brain-based models. In biology, it strengthens research programs that treat cells, tissues, organisms and ecologies as problem-solving systems. In artificial intelligence, it creates a sharper distinction between task competence and biological cognition.
The study cites examples from biology that support life-centered thinking. Slime molds can solve spatial problems and anticipate environmental patterns despite lacking nervous systems. Bacterial communities coordinate through chemical signaling. Regenerating organisms show distributed memory-like processes in tissues. These cases suggest that problem-solving and adaptive regulation are not limited to brains.
For medicine and bioengineering, that view could matter. If cells and tissues are understood as cognitive in a basal sense, then cancer, regeneration, immune response and development can be studied not only as biochemical processes but also as systems of communication, regulation and goal-directed behavior. This does not anthropomorphize cells. It treats them as agents with minimal biological agendas related to maintaining and reorganizing life.
For AI, the framework raises a different question from the usual debate over whether machines can think. The issue becomes whether artificial systems can develop the kind of autonomous, embodied, self-maintaining organization that grounds cognition in living systems. At present, the study's answer is cautious. AI can be engineered to solve problems, but it lacks the intrinsic biological organization that defines natural cognition.
That does not mean future systems will fit neatly into current categories. Hybrid human-AI systems already distribute problem-solving between people, institutions and algorithms. Embodied robots increasingly integrate perception, action and feedback. Synthetic biology may produce systems that combine engineered design with living self-organization. These developments may require more precise categories than today's split between natural and artificial intelligence.
The study also reframes intelligence as a matter of degrees. Instead of asking whether a system is intelligent or not, the framework asks what kind of problem it solves, what goals organize its behavior, how it handles uncertainty and whether its problem-solving is grounded in living self-regulation or external design. This allows researchers to compare intelligence across organisms, humans, machines and hybrid systems without collapsing them into one category.
The distinction between sources of goals is especially important. In living systems, goals are tied to viability. Cells and organisms act in ways that maintain life processes. In current AI, goals are largely human-defined, including prediction, optimization, classification, generation or control. In hybrid systems, goals may be distributed across human users, institutions and machines. Intelligence can appear in all these contexts, but its grounding differs.
The study's framework also challenges anthropocentrism in public discussions of AI. Much of the debate asks whether AI is approaching human-level intelligence. The author's analysis suggests that this question may be too narrow. Human intelligence is one branch of a wider landscape of cognition and problem-solving. AI may match or exceed humans in some engineered tasks while still lacking biological cognition. Meanwhile, living systems often ignored by human-centered theories may display forms of cognition that are fundamental to life.
The paper acknowledges its limitations. It is a conceptual framework rather than an operational measurement system. It does not provide a single test for intelligence across all systems. Instead, it offers a grammar for comparing information, computation and cognition at different levels of organization. Operational definitions will still need to be developed for specific domains, such as cellular biology, robotics, synthetic systems or human-AI collaboration.
The framework gives researchers a way to avoid two common errors. The first is reducing cognition to human conscious thought. The second is treating any successful AI output as evidence of cognition. Between those extremes, the study proposes a layered view: cognition belongs to living organization, intelligence belongs to competent goal-directed problem-solving, and artificial intelligence is a powerful but derivative form of engineered problem-solving unless it acquires deeper self-maintaining organization.
First published in: Devdiscourse