How ambiguity and control shape co-creative AI systems

CO-EDP, VisionRI | Updated: 23-10-2025 09:45 IST | Created: 23-10-2025 09:45 IST

A new study explores the uneasy alliance between human creativity and artificial intelligence, exposing the inherent contradictions that shape their collaboration. As AI systems increasingly co-author art, design, and innovation, the study reveals that true creativity in the age of intelligent machines cannot be achieved by eliminating friction, but by understanding and managing it.

Published in Information, the study “Designing Co-Creative Systems: Five Paradoxes in Human–AI Collaboration” proposes a conceptual framework built on five defining paradoxes that determine how humans and AI can work together creatively. The authors argue that modern tools such as ChatGPT, Midjourney, and Copilot, while transformative, have not yet mastered the complex psychological and social dynamics that make creativity possible.

When machines create: The challenge of shared authorship

The study tackles a key question shaping design, education, and innovation today: can AI be a true creative partner? The authors argue that while generative systems can mimic creativity through pattern recognition and recombination, they struggle with the human side of the process: ambiguity, emotion, and intuition.

Instead of framing these limitations as failures, the study identifies them as paradoxes inherent to human–AI collaboration: ambiguity versus precision, control versus serendipity, speed versus reflection, individual versus collective, and originality versus remix. These are not problems to solve but tensions to balance, and the framework suggests that future co-creative systems must embrace the contradictions rather than attempt to eliminate them.

The first paradox, ambiguity versus precision, captures the mismatch between human creativity and machine logic. Humans thrive on ambiguity, drawing meaning from incomplete or metaphorical ideas. AI, on the other hand, requires explicit and structured input. When artists or designers fail to articulate their intentions clearly, generative systems often return literal or irrelevant results. The study suggests that next-generation AI should include “ambiguity translators”: adaptive systems capable of iteratively refining vague human prompts into usable creative outputs.
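The refinement loop such an “ambiguity translator” implies can be sketched in a few lines. Everything below is an invented toy, not the study’s implementation: the aspect list, the specificity score, and the default clarifications are all illustrative assumptions standing in for a model or a clarifying dialogue with the user.

```python
# Toy "ambiguity translator": iteratively pin down the concrete aspects
# a usable prompt needs. The aspects and defaults are assumptions made
# for illustration, not taken from the study.
ASPECTS = ("subject", "style", "mood", "medium")

# Hypothetical fallback clarifications, one per missing aspect.
DEFAULTS = {
    "subject": "subject: a lone figure",
    "style": "style: minimalist line art",
    "mood": "mood: contemplative",
    "medium": "medium: ink on paper",
}

def specificity(prompt: str) -> int:
    """Count how many concrete aspects the prompt already names."""
    return sum(aspect in prompt for aspect in ASPECTS)

def translate_ambiguity(prompt: str, rounds: int = 4) -> str:
    """Refine a vague prompt until every aspect is pinned down."""
    for _ in range(rounds):
        missing = [a for a in ASPECTS if a not in prompt]
        if not missing:
            break
        # A real system would ask the user (or a model) a clarifying
        # question here; this sketch falls back to a default instead.
        prompt = f"{prompt}; {DEFAULTS[missing[0]]}"
    return prompt

refined = translate_ambiguity("something lonely but hopeful")
print(specificity(refined))  # -> 4: all four aspects now present
```

The point of the sketch is the loop shape, not the scoring: each round surfaces one unresolved dimension of the vague prompt and resolves it before generation proceeds.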

The second paradox, control versus serendipity, reveals the thin line between creative direction and unexpected discovery. Human creators often rely on chance to inspire breakthroughs, while AI’s generative randomness can introduce novelty without intention. The study proposes that users should retain “veto power” to curate, filter, and repurpose AI’s spontaneous suggestions, turning algorithmic unpredictability into guided exploration.
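A minimal reading of “veto power” is a curation filter applied after stochastic generation. The sketch below assumes a stand-in generator (random recombinations of a theme) and a human-supplied veto rule; both are illustrative, not the paper’s design.

```python
# Sketch of guided serendipity: the generator proposes random variants,
# and a human-supplied veto function curates them. The seed, twist pool,
# and keep-rule are illustrative assumptions.
import random

def generate_variants(theme: str, n: int, seed: int = 0) -> list[str]:
    """Stand-in for a generative model: random recombinations of a theme."""
    rng = random.Random(seed)
    twists = ["inverted", "fragmented", "oversaturated", "muted", "layered"]
    return [f"{theme}, {rng.choice(twists)}" for _ in range(n)]

def curate(variants: list[str], veto) -> list[str]:
    """Keep only the variants the human does not veto."""
    return [v for v in variants if not veto(v)]

variants = generate_variants("city at dusk", n=8)
kept = curate(variants, veto=lambda v: "oversaturated" in v)
# Every surviving variant respects the human's veto rule, while the
# randomness that produced the pool is left intact.
```

The design choice mirrors the paradox: unpredictability is preserved at generation time, and human intention is reasserted at selection time rather than by constraining the generator itself.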

Designing for friction: Balancing human and machine strengths

While efficiency has driven most AI design, the researchers argue that speed and automation alone undermine reflection, which is essential to deep creative work. This forms the third paradox: speed versus reflection. AI systems can generate hundreds of ideas within seconds, but this rapid production risks weakening human critical judgment, a phenomenon the study calls “attentional deskilling.” The authors recommend systems that deliberately introduce pause points, encouraging users to evaluate and iterate rather than accept AI outputs uncritically.
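One way to picture such pause points is a review loop that releases ideas only in small batches and blocks until the user records a verdict on each. The batch size and the judgment callback below are assumptions made for illustration, not the authors’ interface design.

```python
# Sketch of deliberate "pause points": ideas are released in small
# batches, and each idea needs an explicit verdict before the loop
# moves on. Batch size and the judge callback are illustrative.

def paced_review(ideas: list[str], judge, batch_size: int = 3) -> dict[str, str]:
    """Collect a verdict for every idea, batch by batch."""
    verdicts: dict[str, str] = {}
    for start in range(0, len(ideas), batch_size):
        batch = ideas[start:start + batch_size]
        for idea in batch:
            # Pause point: no verdict, no progress to the next batch.
            verdicts[idea] = judge(idea)
    return verdicts

ideas = [f"idea-{i}" for i in range(7)]
verdicts = paced_review(
    ideas,
    judge=lambda idea: "keep" if idea.endswith(("0", "3", "6")) else "drop",
)
```

In an interactive tool the `judge` callback would be the user’s explicit evaluation; forcing it between batches is the friction the authors argue efficiency-first design has stripped away.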

The fourth paradox, individual versus collective, examines how generative AI systems trained on massive datasets reshape the boundaries of personal authorship. While humans seek originality and self-expression, AI draws from collective human knowledge. The tension arises when personalization collides with homogenization: AI outputs reflect cultural averages rather than unique perspectives. The authors propose that co-creative systems should allow users to calibrate the balance between personal style and collective knowledge, preventing creative convergence and bias reinforcement.
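Such a calibration dial can be sketched as a weighted blend of two scoring models, one personal and one collective. The score tables and the linear weighting below are invented for illustration; the study proposes the dial, not this particular arithmetic.

```python
# Sketch of a personal-vs-collective calibration dial. Candidate outputs
# are scored by a weighted blend of a personal style model and a
# collective prior; both score tables are illustrative assumptions.

PERSONAL = {"angular collage": 0.9, "soft pastel": 0.2, "stock gradient": 0.1}
COLLECTIVE = {"angular collage": 0.2, "soft pastel": 0.6, "stock gradient": 0.9}

def blended_score(option: str, style_weight: float) -> float:
    """style_weight = 1.0 -> purely personal; 0.0 -> purely collective."""
    return style_weight * PERSONAL[option] + (1 - style_weight) * COLLECTIVE[option]

def pick(style_weight: float) -> str:
    """Choose the candidate with the highest blended score."""
    return max(PERSONAL, key=lambda o: blended_score(o, style_weight))

print(pick(0.9))  # high weight favors the idiosyncratic option
print(pick(0.1))  # low weight drifts toward the collective favorite
```

The dial makes the homogenization risk visible: as `style_weight` falls, the system’s choice converges on whatever the collective prior already rewards.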

The final paradox, originality versus remix, encapsulates the philosophical core of the study. AI generates new content by remixing existing data, producing what the authors describe as “derivative originality.” The system’s creativity lies in combination, not invention. Humans, by contrast, generate meaning through context, purpose, and emotion. The authors argue that this relationship redefines the role of the human creator, shifting it from maker to curator. Rather than producing finished work, the creative human becomes the director of generative systems, selecting, refining, and contextualizing outputs to transform algorithmic novelty into cultural value.

Rethinking creativity: Ethics, design, and the human role

The study confronts the ethical and cognitive implications of shared creativity. It warns that over-reliance on generative AI may lead to cognitive atrophy, where users lose reflective and evaluative skills by deferring too much to machine-generated suggestions. The paper frames this as a new kind of “creative deskilling” that threatens to replace slow thinking and experimentation with algorithmic efficiency.

The authors also highlight pressing ethical concerns surrounding authorship and ownership. As co-creation becomes mainstream, the question of who “owns” AI-assisted work remains unresolved. The study calls for updated copyright and attribution models that recognize shared agency while protecting human intellectual labor. Moreover, it emphasizes that algorithmic bias remains a barrier to diversity in creative output. When AI is trained on culturally dominant data, it risks amplifying mainstream aesthetics and excluding marginalized voices.

In response, the authors advocate for explainable AI (XAI) features in creative systems, interfaces that reveal how and why certain outputs are generated. This transparency would help users understand the boundaries of machine reasoning and reclaim interpretive control. The study also recommends embedding ethical reflection into system design, ensuring that co-creative tools prioritize human intention, cultural sensitivity, and inclusion.

Path forward: Creativity as a dialogue between minds

The paper reframes human–AI collaboration as a dialogue of minds: one conscious, reflective, and value-driven; the other statistical, associative, and non-conscious. The authors conclude that co-creativity cannot be measured by productivity or novelty alone. It must be understood as an evolving negotiation between human intention and machine possibility.

In this vision, creativity is not a linear pipeline but a reciprocal exchange where both agents contribute differently. Humans bring narrative, ethics, and judgment; AI contributes speed, synthesis, and scalability. The creative act emerges in the friction between these contributions, not in their harmony.

The researchers propose that future systems should move away from “assistant” metaphors and toward “collaborator” architectures, where humans and AI share control dynamically. This will require reimagining interface design, workflow structures, and evaluation metrics in education, art, and professional industries.

First published in: Devdiscourse