Digital human twins don’t think, they imitate, and that’s the problem

Digital human twins are increasingly used across virtual environments, yet researchers warn that most are misclassified as intelligent systems. A new study finds that many so-called digital human twins merely reproduce predefined behaviors and fail when conditions change. The research identifies environmental interaction and adaptive learning as the decisive factors that separate functional digital twins from static simulations.

The study “What Makes a Digital Human Twin More Than a Simulation? A Computational-Ecological Stance,” published in AI & Society, reframes how digital human twins should be evaluated, designed, and understood, with implications for artificial intelligence, virtual environments, and human–machine interaction.

Why realism alone fails to create operational digital twins

The paper challenges the dominant “mirror metaphor” that defines digital human twins as faithful replicas of human behavior or internal states. This metaphor assumes that the closer a digital system comes to copying human data, the more capable it becomes. According to the authors, this assumption overlooks a critical distinction between representation and operation.

Simulations that rely heavily on internal models and scripted responses often perform well under predefined conditions but break down when environments change. Minor variations in context, task structure, or environmental constraints can lead to failure because the system lacks the ability to adapt through direct interaction. In these cases, realism becomes a liability rather than an asset, as complex internal representations increase brittleness instead of resilience.

The study points out that this limitation is structural rather than merely technical: no amount of additional data or higher-resolution modeling can fully anticipate the range of situations encountered in dynamic environments. As a result, digital human twins designed primarily as simulations remain dependent on human intervention, frequent recalibration, and narrow use cases.

To move beyond this ceiling, the authors propose a shift toward a computational-ecological perspective. This approach draws on ecological psychology, which emphasizes perception-action coupling, and reinforcement learning, which focuses on learning through interaction. In this view, intelligence emerges not from internal world models but from an agent’s ability to detect and exploit regularities in its environment.

Under this framework, a digital human twin does not need to internally represent every aspect of the world. Instead, it must be embedded in an environment structured by stable laws and constraints, allowing it to learn which actions are possible, effective, or ineffective through experience.
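
To make this concrete, here is a minimal sketch of learning through interaction, using tabular Q-learning in a toy corridor world. The environment, names, and parameters are illustrative assumptions, not the paper's implementation: the point is that the agent stores no model of the corridor, only values for state-action pairs it has actually tried.

```python
import random
from collections import defaultdict

# Illustrative model-free sketch (not from the paper): the agent never
# builds a map of its world; it only records which actions proved
# effective in which states, i.e. it learns the environment's regularities.

class Corridor:
    """A 1-D virtual space: start at cell 0, reward at the right end."""
    def __init__(self, length=5):
        self.length, self.pos = length, 0

    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):                      # action: -1 (left) or +1 (right)
        self.pos = max(0, min(self.length - 1, self.pos + action))
        done = self.pos == self.length - 1
        return self.pos, (1.0 if done else -0.01), done

ACTIONS = (-1, 1)

def train(env, q, episodes=200, alpha=0.5, gamma=0.9, eps=0.1):
    """Tabular Q-learning: behavior adapts purely through interaction and feedback."""
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # Epsilon-greedy: mostly exploit learned regularities, sometimes explore.
            a = random.choice(ACTIONS) if random.random() < eps \
                else max(ACTIONS, key=lambda act: q[(s, act)])
            s2, r, done = env.step(a)
            # Temporal-difference update: shift the estimate toward the
            # observed reward plus the discounted value of the next state.
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in ACTIONS) - q[(s, a)])
            s = s2
    return q

q_values = train(Corridor(), defaultdict(float))
```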

Operational coupling defines the boundary between simulation and autonomy

The study introduces operational coupling, the degree to which a digital human twin is dynamically linked to its environment through feedback loops that support perception, action, and learning. The stronger this coupling, the less the system relies on pre-scripted responses and the more it behaves as an operational agent.
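
The feedback structure that operational coupling measures can be sketched abstractly. The interfaces below are hypothetical, not an API from the study; they simply name the perception-action-learning loop that separates a coupled agent from a scripted one.

```python
from typing import Any, Protocol

# Hypothetical interfaces; the names are illustrative, not the paper's.

class Environment(Protocol):
    def observe(self) -> Any: ...
    def step(self, action: Any) -> tuple[Any, float]: ...  # observation, feedback

class Agent(Protocol):
    def act(self, observation: Any) -> Any: ...
    def learn(self, observation: Any, feedback: float) -> None: ...

def coupling_loop(agent: Agent, env: Environment, steps: int) -> None:
    """Perception -> action -> feedback -> learning, closed at every step."""
    obs = env.observe()
    for _ in range(steps):
        action = agent.act(obs)           # perception drives action...
        obs, feedback = env.step(action)  # ...the environment pushes back...
        agent.learn(obs, feedback)        # ...and feedback updates behavior
```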

To clarify this distinction, the authors introduce the Operational Autonomy Continuum, a five-level framework that categorizes digital human twins based on how they interact with their environments. At the lowest levels, systems operate as scripted simulations, executing predefined actions and failing when conditions deviate from expectations. These systems have minimal coupling and no capacity for adaptation.

At intermediate levels, digital human twins can adjust their behavior within limited bounds. They may tolerate minor variations in environmental conditions but still depend heavily on designer assumptions. While more flexible than pure simulations, these systems remain fragile in unfamiliar contexts.

At higher levels, digital human twins demonstrate robust operational coupling. They learn from experience, adapt their means of achieving goals, and transfer knowledge across related environments. Importantly, this adaptability does not imply independence in goals. The study is explicit that digital human twins remain goal-dependent systems whose objectives are defined by humans.

The distinction lies in independence of means. A highly coupled digital human twin can discover new ways to achieve assigned goals when circumstances change. This capacity allows it to function reliably in environments that cannot be exhaustively modeled in advance.

The authors stress that operational autonomy should not be confused with moral or legal autonomy. The framework is descriptive rather than normative, offering a way to assess functional capability without implying personhood, agency, or responsibility. Accountability remains firmly with human designers and operators.
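
For illustration, the continuum could be encoded as an ordinal scale. The study defines five levels, but the coverage above describes only the low, intermediate, and high bands, so the level names below are hypothetical glosses rather than the paper's own labels.

```python
from enum import IntEnum

# Hypothetical encoding of the five-level Operational Autonomy Continuum.
# Level names are illustrative glosses of the bands described above,
# not the study's terminology.
class OperationalAutonomy(IntEnum):
    SCRIPTED = 0            # executes predefined actions, fails on any deviation
    WEAKLY_COUPLED = 1      # tolerates minor, designer-anticipated variation
    BOUNDED_ADAPTIVE = 2    # adjusts behavior within limited bounds
    LEARNING = 3            # learns new means to assigned goals from experience
    TRANSFERABLE = 4        # transfers learned behavior across related environments
```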

Rethinking perception, affordances, and non-human interaction

The study extends ecological affordance theory beyond human perception. Traditionally, affordances describe the action possibilities that an environment offers to a perceiving organism. The authors argue that this concept can be generalized to interactions between digital artifacts.

The paper introduces the idea of artifact-to-artifact affordance, describing how digital agents can detect and exploit action possibilities in other digital systems without human mediation. In complex virtual environments populated by multiple agents, tools, and structures, this capability becomes essential for coordination and stability.
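
A rough sketch of how artifact-to-artifact affordances might look in code: one artifact advertises the actions it currently offers, and another agent queries and exploits them without human mediation. All class and method names here are illustrative assumptions, not the paper's formalism.

```python
from typing import Protocol

# Hypothetical affordance interface between digital artifacts.

class Artifact(Protocol):
    def afforded_actions(self) -> frozenset[str]: ...
    def apply(self, action: str) -> bool: ...   # True if the action succeeded

class Door:
    """A virtual-world artifact whose affordances depend on its current state."""
    def __init__(self):
        self.open = False

    def afforded_actions(self) -> frozenset[str]:
        return frozenset({"close"} if self.open else {"open"})

    def apply(self, action: str) -> bool:
        if action not in self.afforded_actions():
            return False                         # the action is not afforded right now
        self.open = (action == "open")
        return True

def exploit(agent_wants: str, artifact: Artifact) -> bool:
    """Act on an affordance only if the artifact currently offers it."""
    return agent_wants in artifact.afforded_actions() and artifact.apply(agent_wants)

door = Door()
exploit("open", door)    # True: the closed door affords opening
exploit("open", door)    # False: an open door no longer affords "open"
```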

By shifting focus away from human-centered perception, the study reframes digital environments as ecosystems of interacting computational entities. In such ecosystems, digital human twins are not isolated replicas of humans but participants in structured environments governed by rules, constraints, and regularities.

This perspective aligns with object-oriented ontology, which treats non-human entities as having relational capacities independent of human interpretation. Applied to digital human twins, this means evaluating systems based on how effectively they interact with their environments rather than how accurately they mimic human behavior.

The authors illustrate this shift through conceptual examples involving agents navigating virtual spaces. These examples show how learning-based agents can develop stable behaviors through repeated interaction, even when they lack detailed internal models of their surroundings. The emphasis is on performance under variation rather than fidelity to human cognition.
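
Continuing the corridor sketch from earlier (and assuming its Corridor class, train function, and q_values table), performance under variation can be illustrated directly: the environment changes, the assigned goal does not, and the same agent relearns effective means through interaction alone.

```python
# Continues the earlier corridor sketch; assumes Corridor, train, and
# q_values are already defined and trained.
perturbed = Corridor(length=9)                    # same laws, new constraint:
q_values = train(perturbed, q_values, episodes=300)  # no redesign, just more experience

# The policy that emerges is stable under the new conditions even though the
# agent holds no explicit model of the corridor, only (state, action) values.
```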

Implications for design, governance, and evaluation

The study’s arguments carry significant implications for how digital human twins are developed and deployed. First, it suggests that design priorities should shift away from exhaustive representation and toward environmental structure. Well-designed environments with stable constraints can support more robust agent behavior than complex internal models alone.

Second, the framework provides a basis for evaluating digital human twins beyond surface realism. Instead of asking how human-like a system appears, researchers and developers can assess where it falls on the operational autonomy continuum. This shift enables more transparent comparison of systems and clearer communication about capabilities and limitations.
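
Assuming the OperationalAutonomy scale sketched earlier, such an assessment could be expressed as a simple behavioral rubric; the test criteria below are illustrative, not the study's evaluation protocol.

```python
# Hypothetical rubric, reusing the OperationalAutonomy sketch above: place a
# system on the continuum by the strongest behavior it demonstrates under
# test, not by how human-like it appears.
def assess(tolerates_minor_variation: bool,
           adapts_within_bounds: bool,
           learns_new_means: bool,
           transfers_across_environments: bool) -> OperationalAutonomy:
    if transfers_across_environments:
        return OperationalAutonomy.TRANSFERABLE
    if learns_new_means:
        return OperationalAutonomy.LEARNING
    if adapts_within_bounds:
        return OperationalAutonomy.BOUNDED_ADAPTIVE
    if tolerates_minor_variation:
        return OperationalAutonomy.WEAKLY_COUPLED
    return OperationalAutonomy.SCRIPTED
```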

Third, the study raises governance questions that extend beyond technical performance. As digital human twins acquire learning histories and behaviors that diverge from their human counterparts, issues of identity, control, and oversight become more complex. While the paper does not address legal or ethical status directly, it highlights the need for clearer accountability mechanisms as systems become less predictable through interaction.

The framework applies primarily to virtual environments and does not assume physical-world deployment. It also avoids claims about consciousness, intentionality, or rights. Instead, it provides a conceptual tool for understanding functional transitions that are already occurring in simulation-based systems.
