AI that questions instead of answers: A paradigm shift in human-machine reasoning

CO-EDP, VisionRI | Updated: 16-07-2025 12:30 IST | Created: 16-07-2025 12:30 IST

Romanian researcher Delia Deliu has proposed an AI model built not to solve problems or deliver fast answers, but to preserve mental discomfort. The model, named Cognitive Dissonance AI (CD-AI), aims to harness confusion, contradiction, and epistemic struggle as catalysts for deep reasoning.

Presented at the 2025 ACM Workshop on Human-AI Interaction for Augmented Reasoning (AIREASONING-2025-01), the study, titled “Cognitive Dissonance Artificial Intelligence (CD-AI): The Mind at War with Itself. Harnessing Discomfort to Sharpen Critical Thinking,” challenges decades of algorithmic tradition in favor of a model that intentionally defies clarity, resolution, and ease of thought.

By embedding cognitive dissonance into AI’s architecture, Deliu introduces a controversial yet intellectually ambitious vision for the future of human-machine interaction - one where discomfort is not a defect but a feature designed to sharpen the mind and foster democratic resilience.

What is CD-AI and why is it a radical shift?

Most artificial intelligence systems are engineered to eliminate ambiguity. Their core function is to reduce uncertainty, streamline decisions, and reinforce the most probable or popular outcomes. From content recommendations to legal judgments, AI has been tuned to emulate certainty and efficiency. But Deliu’s CD-AI model moves in the opposite direction.

At its core, CD-AI is designed to hold competing truths in balance. Rather than resolve contradiction, it sustains it. The system is architected to keep users within a state of cognitive tension by presenting conflicting viewpoints, exposing biases, and resisting definitive conclusions. The rationale, Deliu explains, is that genuine intellectual growth and moral development occur not in certainty, but in the discomfort of unresolved conflict.

In practice, CD-AI would function less like an assistant and more like a dialectical opponent. It would constantly pose alternative viewpoints, challenge assumptions, and encourage recursive self-questioning. Such a model, Deliu argues, could be especially useful in domains like ethics, law, politics, and science, where nuance and ambiguity are inherent rather than optional.
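The paper does not publish an implementation, but the "dialectical opponent" behavior described above can be caricatured in a few lines of code. The sketch below is purely hypothetical (the class name, fields, and response format are invented for illustration): the agent never returns a verdict, only counter-positions and a self-questioning probe, while tracking how long the user has stayed in unresolved tension.

```python
from dataclasses import dataclass, field

@dataclass
class DialecticalAgent:
    """Hypothetical sketch of a CD-AI-style interaction loop: instead of
    answering, the agent surfaces counterpoints and probing questions,
    and counts how many rounds the user has remained in tension."""
    tension_rounds: int = 0
    history: list = field(default_factory=list)

    def respond(self, claim: str, counterpoints: list) -> dict:
        # Refuse to pick a winner: return every counterpoint plus a
        # recursive self-questioning prompt -- by design, no resolution.
        self.tension_rounds += 1
        self.history.append(claim)
        return {
            "verdict": None,  # deliberately withheld
            "counterpoints": counterpoints,
            "probe": f"What evidence would change your mind about: '{claim}'?",
            "rounds_in_tension": self.tension_rounds,
        }

agent = DialecticalAgent()
out = agent.respond(
    "Automation always improves productivity",
    ["Automation can deskill workers", "Gains may concentrate unevenly"],
)
```

A real CD-AI would of course generate counterpoints rather than receive them; the point of the sketch is only the interaction contract, in which resolution is structurally absent.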

How can cognitive dissonance improve human reasoning?

The theoretical underpinning of CD-AI is based on well-documented psychological mechanisms. Cognitive dissonance - the internal conflict experienced when holding two incompatible beliefs - has long been known to drive re-evaluation, justification, and deeper thought. Deliu’s innovation lies in transforming this dissonance from a side-effect of thinking into a design goal for AI systems.

By maintaining this internal tension, CD-AI can activate what Deliu calls “epistemic humility,” prompting users to recognize the limits of their knowledge and to engage more rigorously with complex ideas. Rather than being fed algorithmically curated conclusions, users would have to work through competing narratives, all equally plausible, none offered as the singular truth.

Deliu also situates her model within a broader critique of the current epistemic culture. As AI increasingly shapes public opinion, the risk of producing intellectual echo chambers grows. CD-AI offers a defense mechanism against this, encouraging a critical distance from one’s own beliefs and the mainstream narratives being algorithmically reinforced.

In contrast to current generative models, which frequently default to inoffensive generalities or pseudo-consensus, CD-AI is designed to disrupt that equilibrium. The model could help inoculate societies against the cognitive vulnerabilities that allow misinformation, authoritarian thinking, and ideological rigidity to flourish.

What are the ethical and practical risks of the CD-AI model?

Despite its intellectual elegance, the CD-AI model is not without significant risks. As Deliu candidly acknowledges, engineering discomfort into machine intelligence could backfire. Prolonged cognitive dissonance can lead to mental fatigue, decision paralysis, or even emotional distress. If misused or poorly regulated, the system could manipulate, mislead, or erode psychological well-being, raising profound ethical concerns.

Additionally, the model assumes a user base capable of and willing to engage with deep epistemic struggle. In an online culture increasingly geared toward speed, simplicity, and instant gratification, it remains to be seen whether CD-AI could gain meaningful traction. There are also operational challenges: How should such an AI be evaluated, benchmarked, or monitored, given that success would be measured not in resolution but in resistance?
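The evaluation question raised above admits at least one concrete framing. The sketch below is a speculative metric of my own construction, not one proposed in the paper: if success is "resistance rather than resolution," a benchmark might track the share of sessions that end without a definitive verdict, alongside the average number of distinct viewpoints surfaced per session.

```python
# Hypothetical benchmark sketch for a CD-AI-style system. The metric
# names and session schema are assumptions, not from Deliu's paper.

def dissonance_metrics(sessions):
    """sessions: list of dicts with keys 'resolved' (bool) and
    'viewpoints' (list of str).
    Returns (retention_rate, avg_viewpoints), where retention_rate is
    the fraction of sessions left deliberately unresolved."""
    if not sessions:
        return 0.0, 0.0
    unresolved = sum(1 for s in sessions if not s["resolved"])
    avg_views = sum(len(s["viewpoints"]) for s in sessions) / len(sessions)
    return unresolved / len(sessions), avg_views

sessions = [
    {"resolved": False, "viewpoints": ["A", "B", "C"]},
    {"resolved": True, "viewpoints": ["A"]},
]
rate, avg = dissonance_metrics(sessions)
# rate == 0.5, avg == 2.0
```

Whether such counts capture genuine epistemic engagement, rather than mere evasiveness, is exactly the open monitoring problem the article identifies.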

The author calls for a multidisciplinary effort to guide the responsible development of CD-AI. This would include philosophical ethicists, psychologists, AI engineers, and legal scholars collaborating on frameworks that balance user autonomy, safety, and intellectual rigor. Without careful oversight, she warns, the very ambiguity that makes CD-AI valuable could also be weaponized.

  • FIRST PUBLISHED IN:
  • Devdiscourse