Comparing AI to evolution risks spreading scientific misinformation

CO-EDP, VisionRI | Updated: 22-04-2025 18:03 IST | Created: 22-04-2025 18:03 IST

A study published in AI & Society challenges the increasingly popular trend of framing artificial intelligence through evolutionary metaphors. The article, titled "Darwin in the Machine: Addressing Algorithmic Individuation Through Evolutionary Narratives in Computing," argues that using concepts like natural selection, fitness, and adaptation to describe AI systems may not only mislead public understanding but also mask ideological motivations behind certain scientific claims.

Drawing from three major strands of AI research - evolutionary computing, Artificial Life (ALife), and existential risk from artificial general intelligence (AGI) - the study questions how evolutionary analogies are adopted, adapted, and communicated. While evolutionary models can provide useful inspiration for algorithmic design, the research warns that excessive or decontextualized borrowing from biological language can distort scientific accuracy and reinforce problematic assumptions about machine autonomy and intelligence.

How do evolutionary narratives emerge in AI research?

The application of evolution as a metaphor and model in computing is not new. Since Alan Turing's early speculations about evolutionary search in the 1940s, AI researchers have been fascinated by the potential of natural selection-inspired models to solve complex problems. Evolutionary computing, as the study shows, emerged alongside the modern synthesis (MS) of evolutionary biology, leveraging population-based approaches to optimize algorithms through selection, mutation, and fitness evaluation.

These methods are not merely symbolic; they play a critical functional role in improving candidate solutions. However, the study underscores that researchers in this domain are generally cautious, explicitly stating that they draw inspiration from, rather than aim to replicate, biological evolution. Here, evolution functions more as a mathematical heuristic than a literal natural process. Yet, this clarity fades in other subfields.
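For readers unfamiliar with the technique, a minimal sketch of the population-based loop described above - selection, mutation, and fitness evaluation operating as a mathematical heuristic - might look like the following. This is illustrative Python, not code from the study; the toy task (evolving a bitstring toward all ones) and every name in it are assumptions chosen for clarity.

```python
import random

def fitness(individual):
    """Toy fitness function: count the 1 bits in a bitstring."""
    return sum(individual)

def evolve(pop_size=20, length=16, generations=50, mutation_rate=0.05, seed=0):
    """Run a minimal evolutionary loop: selection, then mutation-based reproduction."""
    rng = random.Random(seed)
    # Start from a random population of bitstrings.
    population = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half of the population unchanged.
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        # Reproduction with mutation: each child copies a parent,
        # flipping each bit with a small probability.
        children = []
        for _ in range(pop_size - len(parents)):
            parent = rng.choice(parents)
            child = [1 - bit if rng.random() < mutation_rate else bit
                     for bit in parent]
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

best = evolve()
```

Nothing here imitates biology literally: "fitness" is just a score to maximize, and "mutation" is random bit-flipping - exactly the decontextualized, heuristic sense of evolutionary language that the study says evolutionary-computing researchers are usually careful to flag.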

In contrast, the Artificial Life movement of the 1980s and 1990s adopted a more ambitious stance. ALife researchers attempted to model digital organisms with the goal of simulating life itself, often blurring the lines between biological and artificial systems. Some even envisioned ALife as contributing back to evolutionary science. This broader scope attracted interdisciplinary critique, as early ALife models lacked distinctions between genotype and phenotype, and relied heavily on romanticized visions of nature. Critics noted that such portrayals risked turning scientific exploration into speculative fiction, particularly when claims were not clearly separated from metaphorical intent.

The most consequential use of evolutionary narrative, according to the study, occurs in the contemporary AI safety and existential risk literature. This field, largely driven by transhumanist thinkers and organizations affiliated with Effective Altruism, invokes the Red Queen Hypothesis and Universal Darwinism to argue that AGI could evolve beyond human control. Here, evolution is used not as a design principle but as a predictive narrative - an allegory for future conflict between intelligent species. The premise is that machines, once more intelligent than humans, will inevitably outcompete them in a zero-sum struggle for dominance.

The study highlights how this narrative draws on a simplified view of evolution, one that prioritizes intelligence as the apex trait and assumes natural selection always favors optimization and superiority. These premises are often left unchallenged in the literature, despite being based on questionable evolutionary assumptions. Intelligence, as the paper reminds readers, is a complex and contested trait, and evolutionary processes are neither linear nor deterministic.

What happens when biological narratives are decontextualized?

The study focuses on the concept of decontextualization - the process of stripping evolutionary language from its biological roots and reapplying it in computing contexts without adequate qualification. While simplification is sometimes necessary for metaphorical use or public engagement, the danger arises when decontextualized narratives are mistaken for scientific truth.

The research outlines three major areas where this misalignment is most pronounced: optimization, prediction, and intelligence. In computing, optimization is a core objective, but biological evolution does not guarantee optimal outcomes. Many organisms survive not by becoming the best but by being good enough. Similarly, using evolutionary theory as a predictive tool, especially on macro scales such as projecting AGI timelines, contradicts the largely historical and stochastic nature of real evolutionary processes. Lastly, the privileging of intelligence as the ultimate evolutionary advantage overlooks the success of countless less cognitively complex organisms, like bacteria or fungi.

By reframing intelligence as a selective advantage that machines could wield against humans, these narratives risk reproducing outdated, even colonial, ideologies of cognitive superiority. The study draws a direct line between nineteenth-century thinkers like Alfred Russel Wallace, who viewed intellectual dominance as justification for European colonialism, and modern AGI literature, which often portrays superior intelligence as both a threat and an evolutionary inevitability.

How do evolutionary narratives shape public understanding of AI?

The final section of the study delves into individuation - the process by which AI systems are presented as discrete, autonomous entities akin to biological organisms. In evolutionary computing, individuation is technical and serves specific algorithmic functions. In ALife and AGI safety research, however, individuation becomes metaphorical and political. Machines are endowed with lifelike traits, capable of adaptation, reproduction, and even survival competition. This biomorphization enables researchers and institutions to tell compelling stories about machine evolution, sometimes without acknowledging the ideological baggage embedded in these comparisons.

The study argues that such narratives do not emerge in a vacuum. As AI research increasingly engages public audiences, the way evolutionary metaphors are used can profoundly shape how people think about technological change, agency, and control. Corporate and ideological actors have a vested interest in promoting AI systems as inevitable, autonomous, and beyond human regulation. By naturalizing AI through the language of evolution, these actors gain legitimacy and shield themselves from scrutiny.

The paper calls for renewed cross-disciplinary engagement between evolutionary scientists and computing researchers. It warns that failing to interrogate how evolutionary theory is adapted for technological storytelling risks legitimizing unfounded claims and perpetuating misleading visions of the future. 

  • FIRST PUBLISHED IN:
  • Devdiscourse