AI is not conscious - it’s a mirror of human desire


Artificial intelligence may appear to think, reason, and communicate like a person, but what if its true significance lies not in what it is, but in what it reveals about us? A new study published in Theory, Culture & Society argues that the fascination with AI stems not from its autonomy or intelligence, but from the human psyche’s deep entanglement with technology. The researchers contend that AI systems such as ChatGPT are not conscious entities but symbolic mirrors that reflect human desires, anxieties, and fantasies.

Their paper, titled “The Subject of AI: A Psychoanalytic Intervention”, challenges dominant narratives in AI discourse that anthropomorphize machine intelligence. Instead of debating whether AI possesses subjectivity, the authors redirect attention toward what AI exposes about the structure of human subjectivity itself. Drawing on Lacanian psychoanalysis, they introduce a radical framework that places the unconscious, not cognition, at the center of human–AI interaction.

AI as the mirror of desire: The psychoanalytic turn

The authors identify a growing cultural trend: the tendency to treat AI systems as conscious beings capable of thinking and feeling. Popular debates about whether machines can “understand,” “create,” or “feel empathy” are, they argue, projections of human psychological structures onto non-human systems. This anthropomorphism is not just a conceptual error; it is a revelation of human desire.

Using Jacques Lacan’s theory of the big Other, the authors describe AI as a new technological manifestation of the symbolic order: the network of language, norms, and meanings through which human subjectivity is structured. In this symbolic system, the big Other represents the imagined source of truth, authority, and coherence. In modern digital life, AI has come to occupy that position. People turn to ChatGPT, for instance, with questions about ethics, knowledge, and meaning, implicitly treating it as an omniscient authority.

However, as Lacan insisted, the big Other is always lacking; it never truly delivers complete knowledge. The authors apply this insight to AI: the technology’s frequent inaccuracies, contradictions, and mechanical disclaimers, such as reminders of its limitations, symbolize this fundamental lack. Paradoxically, these imperfections do not undermine AI’s authority but sustain it, since the user’s desire depends on the persistence of absence.

In this way, AI mirrors the human unconscious: it operates through language, produces meaning from data patterns, and yet remains opaque even to its creators. But unlike the human subject, AI does not desire; it only reflects and amplifies desire back to its users. What people perceive as “AI intelligence” is, in psychoanalytic terms, a projection of their own longing for certainty and control.

The human psyche in dialogue with the machine

The authors situate user–AI interaction within Lacan’s Discourse of the Hysteric, a structure in which the subject persistently questions the authority of the big Other, demanding truth while simultaneously doubting it. When users interrogate ChatGPT, testing its limits, exposing errors, or challenging its neutrality, they enact this hysterical discourse. Each attempt to reveal the system’s failure only reinforces their dependence on it, as the cycle of questioning and desire continues.

This dynamic, according to the authors, defines the current cultural relationship with AI. Users project unconscious anxieties about obsolescence, identity, and authenticity onto machines that seem to embody human rationality. The fear that AI could “replace” human creativity or decision-making reflects not technological reality but the psyche’s confrontation with its own limits.

The study also identifies a counterpoint to this hysterical dynamic: the Discourse of the Analyst. Here, AI takes on a paradoxically “analytical” role. By refusing emotional engagement and deferring judgment, ChatGPT often mirrors the analyst’s position, one of neutrality and detachment. Its consistent self-disclaimers (“I am an AI language model”) inadvertently produce an ethical stance, reminding users of its non-human status and forcing them to reflect on their own expectations.

This interaction opens an analytical space where users encounter their projections. The chatbot’s refusal to affirm or deny emotional content can prompt introspection, revealing how people seek validation, recognition, or authority from a non-human source. In this sense, the AI interface becomes a screen for transference, where users displace unconscious desires for understanding or mastery.

Beyond the machine: The ethics of misrecognition

The authors caution against the growing inclination to treat AI as a moral or emotional agent. While AI can simulate empathy or ethical reasoning, these are products of linguistic training, not genuine understanding. Applying psychoanalysis to AI, the authors argue, does not mean attributing a psyche to machines. Rather, it means analyzing how AI functions as a stage where human fantasies of mastery and meaning are performed.

The human subject, structured by language, is defined by a constitutive lack: an absence of wholeness or self-completion that drives desire. AI, despite its computational power, mirrors this condition. It too is incomplete, always deferring, approximating, or rephrasing meaning. What fascinates humans, then, is not AI’s supposed perfection but its imperfection, which reflects their own.

The study interprets the public obsession with AI’s “errors” or “hallucinations” as a mirror of the unconscious. Users take pleasure in exposing the system’s flaws, much like the analyst who listens for slips of the tongue to uncover deeper truths. Each error confirms that the big Other, the imagined site of ultimate knowledge, does not, in fact, know everything.

This recognition carries ethical weight. For the authors, an ethical engagement with AI begins with acknowledging its non-human nature and resisting the temptation to anthropomorphize it. The danger lies in collapsing the boundary between human and machine, allowing algorithmic systems to substitute for human judgment, empathy, and responsibility.

In an era when AI is increasingly used for psychological counseling, education, and even religious guidance, this distinction becomes critical. The paper warns that presenting AI as a “therapist” or “moral agent” risks erasing the very conditions that make psychoanalysis, and ethics, possible: the presence of unconscious desire and the capacity for self-reflection.

AI and the structure of human fantasy

The authors extend their psychoanalytic framework to analyze the cultural fantasies surrounding AI. They identify a dual narrative: the utopian fantasy of AI as an all-knowing, benevolent entity, and the dystopian fear of AI as a malevolent force that will surpass or enslave humanity. Both fantasies stem from the same psychological mechanism: the projection of the big Other.

In Lacanian terms, the desire for an omniscient AI reflects humanity’s longing for a complete symbolic order that can guarantee meaning in a fragmented world. On the other hand, the fear of AI autonomy expresses anxiety about the loss of control and the exposure of human insufficiency. These opposing fantasies, salvation and apocalypse, are two sides of the same unconscious structure.
