From assistants to partners: Why the human–AI relationship is still undefined


CO-EDP, VisionRI | Updated: 31-01-2026 18:44 IST | Created: 31-01-2026 18:44 IST

Governments and organizations are racing to regulate artificial intelligence, but a new academic review suggests they may be doing so without a stable understanding of how humans and AI actually interact. The study finds that research on human–AI relationships is expanding rapidly yet remains fragmented, complicating efforts to define accountability, oversight, and ethical responsibility.

The findings are presented in Mapping Human–AI Relationships: Intellectual Structure and Conceptual Insights, published in Technologies. Based on a bibliometric analysis of 4,093 peer-reviewed studies, the research maps five dominant but weakly connected themes and concludes that the absence of a shared conceptual framework is now a structural risk for AI governance.

Explosive growth, weak integration across human–AI research 

The study shows that research on human–AI relationships has expanded exponentially since 2020, with publication volumes more than doubling in a short period. This surge closely follows the rise of deep learning and, more recently, generative AI systems that can produce text, images, and code. As AI systems have become more autonomous and interactive, academic focus has shifted from traditional human–computer interaction toward more complex forms of collaboration between humans and intelligent agents.

Despite this growth, the authors find that the field remains structurally immature. Using bibliometric co-word analysis, the study maps how key concepts co-occur across thousands of papers. This approach reveals five major thematic clusters that dominate the literature: Human–AI Interactions, Teaming and Augmentation, Human–AI Collaboration, Conversational AI, and Ethics and Responsibility.
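
For readers unfamiliar with the method, co-word analysis counts how often keywords appear together across papers and then clusters the resulting co-occurrence network to surface themes. The sketch below is a minimal, hypothetical illustration of that idea in Python; the keyword lists are placeholders and the modularity-based clustering step is a generic stand-in, not the study's actual pipeline or data.

```python
# Minimal sketch of co-word analysis: build a keyword co-occurrence network
# from per-paper keyword lists, then detect thematic clusters with a
# modularity-based community detection step. Keyword lists are hypothetical.
from itertools import combinations
from collections import Counter
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

papers = [
    ["explainable AI", "decision support", "human-in-the-loop"],
    ["chatbots", "user experience", "personalization"],
    ["generative AI", "co-creation", "creativity"],
    ["trust", "shared decision-making", "hybrid teams"],
    ["responsible AI", "privacy", "transparency"],
]

# Count how often each pair of keywords appears in the same paper.
pair_counts = Counter()
for keywords in papers:
    for a, b in combinations(sorted(set(keywords)), 2):
        pair_counts[(a, b)] += 1

# Build a weighted co-occurrence graph and extract communities (themes).
G = nx.Graph()
for (a, b), weight in pair_counts.items():
    G.add_edge(a, b, weight=weight)

clusters = greedy_modularity_communities(G, weight="weight")
for i, cluster in enumerate(clusters, 1):
    print(f"Cluster {i}: {sorted(cluster)}")
```

On a real corpus, each detected community would correspond to a candidate theme such as the five clusters the study reports, and the centrality of a cluster's keywords indicates how strongly it connects to the rest of the field.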

Among these, Human–AI Interactions emerges as the most central theme, connecting research on explainable AI, decision support, human-in-the-loop systems, and interpretability. Closely linked are Teaming and Augmentation, which focuses on trust, shared decision-making, and hybrid human–AI teams, and Human–AI Collaboration, which increasingly centers on generative AI, co-creation, and creativity.

However, the study finds that these core themes lack strong internal cohesion. They function as broad, catch-all categories rather than well-defined research programs. At the same time, Conversational AI and Ethics and Responsibility appear as highly developed but relatively isolated clusters. Conversational AI research concentrates on chatbots, user experience, personalization, and emotional interaction, while ethics-focused research addresses trustworthiness, privacy, responsible AI, and sustainability. Both areas show strong internal consistency but limited integration with broader human–AI collaboration research.

This structural pattern suggests that researchers are advancing in parallel silos rather than building cumulative knowledge. The absence of any dominant “motor theme” indicates that the field has yet to establish a unifying framework capable of organizing theory, evidence, and application. According to the authors, this helps explain why organizations continue to struggle with inconsistent AI adoption strategies despite a growing evidence base.

Four archetypes define how humans and AI actually work together

To address this conceptual gap, the study proposes a new framework that classifies human–AI relationships based on two fundamental dimensions: the level of AI autonomy and the degree of human involvement in decision-making and control. From this model, the authors identify four archetypes of intelligence that capture how humans and AI systems interact in real organizational settings.

Assisted intelligence represents the most conservative configuration. In this model, AI systems operate as decision-support tools under continuous human supervision. Humans retain full authority, while AI assists with data analysis, pattern recognition, and recommendations. This archetype aligns with explainable AI and human-in-the-loop approaches, particularly in high-stakes fields such as healthcare, finance, and safety-critical systems. The study notes that assisted intelligence is widely adopted but limited in its capacity to drive transformation, as it prioritizes control over innovation.

Augmented intelligence moves further toward collaboration by using AI to enhance human judgment rather than merely support it. Here, AI systems proactively contribute insights, while humans maintain conditional oversight. This model is common in complex decision-making environments where speed, scale, and uncertainty exceed human cognitive limits. However, the authors highlight mixed evidence regarding its effectiveness. While augmentation can improve individual performance, hybrid human–AI teams often fail to outperform the best human or AI agents alone, largely due to trust issues, poor interface design, and organizational resistance.

Symbiotic intelligence represents the most ambitious form of human–AI collaboration. In this archetype, humans and AI systems share agency, adapt to one another, and co-create outcomes through continuous feedback loops. This configuration is closely associated with generative AI, creative work, and innovation-driven environments. The study identifies symbiotic intelligence as central to the future of human–AI research, but also as the most challenging to govern. Shared control raises complex questions about accountability, transparency, and ethical oversight that existing organizational structures are poorly equipped to handle.

Substituted intelligence sits at the opposite end of the spectrum, involving full automation with minimal or no human involvement. AI systems operate autonomously, making decisions based on predefined objectives and learned patterns. While this model can deliver efficiency gains in low-uncertainty contexts, the authors warn that it poses serious risks when applied to complex social or ethical domains. Concerns include opacity, bias, loss of human accountability, and over-reliance on algorithmic authority.

These four archetypes are not stages in a linear progression. Organizations often deploy all four simultaneously across different functions. Routine tasks may be automated using substituted intelligence, while strategic decisions rely on augmented or assisted intelligence, and innovation efforts draw on symbiotic collaboration. Understanding this coexistence is key to avoiding one-size-fits-all AI strategies.
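
To make the two-dimensional framework concrete, the sketch below shows one way the four archetypes could be expressed in code as a simple mapping over AI autonomy and human involvement. The numeric thresholds and the exact quadrant boundaries are illustrative assumptions drawn from the article's descriptions, not the authors' formal model.

```python
# Illustrative mapping of the framework's two dimensions onto the four
# archetypes described in the study. Thresholds are hypothetical; the study
# defines the dimensions conceptually, not on a fixed numeric scale.
from enum import Enum

class Archetype(Enum):
    ASSISTED = "Assisted intelligence"        # AI as supervised decision support
    AUGMENTED = "Augmented intelligence"      # AI enhances judgment, conditional oversight
    SYMBIOTIC = "Symbiotic intelligence"      # shared agency, mutual adaptation
    SUBSTITUTED = "Substituted intelligence"  # full automation, minimal human involvement

def classify(ai_autonomy: float, human_involvement: float) -> Archetype:
    """Map a configuration onto an archetype; both inputs on an assumed 0-1 scale."""
    if human_involvement < 0.5:
        # Minimal human involvement: the system effectively substitutes for people.
        return Archetype.SUBSTITUTED
    if ai_autonomy < 0.35:
        return Archetype.ASSISTED    # humans retain full authority over a support tool
    if ai_autonomy < 0.7:
        return Archetype.AUGMENTED   # AI proactively contributes, humans keep oversight
    return Archetype.SYMBIOTIC       # high autonomy with sustained human engagement

# Example: an autonomous routing system with little human review.
print(classify(ai_autonomy=0.9, human_involvement=0.1).value)  # Substituted intelligence
```

In practice, a single organization would apply different configurations to different functions, which is why the authors stress that the archetypes coexist rather than form a maturity ladder.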

Why conceptual clarity now matters for AI governance and adoption

The study highlights why the lack of shared understanding around human–AI relationships has become a pressing issue. Industry frameworks promoted by major consulting firms and technology companies often use similar language but embed very different assumptions about AI’s role. Without conceptual clarity, organizations risk misaligning AI systems with human workflows, ethical norms, and strategic goals.

The authors argue that many current AI failures stem not from technical limitations but from poorly defined human–AI roles. When AI autonomy increases without corresponding changes in governance, trust erodes. When human oversight is maintained without adapting workflows, efficiency gains disappear. The resulting tension contributes to phenomena such as automation bias, algorithm aversion, and inconsistent performance across teams.

Ethics and responsibility, while prominent in academic discourse, are often treated as add-ons rather than integrated design principles. The study’s findings suggest that ethical considerations must be embedded across all human–AI archetypes, not confined to isolated research silos. Issues such as transparency, accountability, and participatory design are as relevant to collaborative creativity as they are to automated decision systems.

The study also places human–AI research within the broader shift toward Industry 5.0, which emphasizes human-centric, sustainable, and resilient systems. From this perspective, the goal is not to replace human intelligence but to manage cognitive interdependence strategically. AI adoption, the authors suggest, should be evaluated not only in terms of efficiency but also in terms of how it reshapes human skills, agency, and organizational learning.

Future research, as the authors stress, must move beyond abstract debates and examine how different human–AI configurations perform over time in real environments. Without this evidence, the gap between AI’s technical capabilities and its social integration is likely to widen.

  • FIRST PUBLISHED IN: Devdiscourse