AI fear rooted in Western myths, not just technology

CO-EDP, VisionRI | Updated: 16-04-2025 18:29 IST | Created: 16-04-2025 18:29 IST

From fictional robots that destroy their creators to chatbots feared for surpassing human intelligence, the fear of artificial intelligence is increasingly shaping public discourse. But according to a new study published in AI & Society, the fear may say more about humanity than it does about machines. The study, titled “Fear of Artificial Intelligence or Fear of Looking in the Mirror? Revisiting the Western Machine-Takeover Imaginary,” argues that modern AI anxiety is a reflection of longstanding Western cultural myths. Authored by Niels Wilde of Aarhus University, the research traces the philosophical, historical, and psychological roots of so-called “robophobia” and contends that fear of AI is, at its core, fear of ourselves.

The paper outlines two dominant cultural narratives behind AI fear. The first is the fear of non-human autonomy - a longstanding mythos where human creations rise in revolt. The second is the logic of the genie, a metaphor for AI’s promise and peril: a wish-fulfilling assistant that twists commands through ambiguous interpretation. Together, these narratives form what Wilde calls the “Western machine-takeover imaginary,” which shapes how AI is perceived, debated, and feared across societies.

Why do we fear machines that act like humans or humans that act like machines?

The fear of non-human autonomy dates back centuries, well before modern computing. From the story of Frankenstein’s monster to The Terminator and The Matrix, Western culture has long told tales of artificial beings rebelling against their human creators. Wilde argues that this is not just fiction - it is deeply embedded in the Western idea of creation and control. In this cultural framework, to create something powerful is inherently to risk losing control over it. AI, like the ancient golem or mythical Pandora, becomes a mirror of the human desire to dominate and the simultaneous fear of being dominated in return.

This narrative is not confined to speculative fiction. As Wilde points out, public fear of AI remains high, with surveys from Reuters and PwC showing that more than half of Americans and Danes fear AI’s societal consequences. Yet most experts agree that superintelligent, world-dominating AI is neither imminent nor technically feasible in the foreseeable future. The fear, then, is not strictly about technical risk - it is ontological, rooted in what it means to be human when machines become too human-like.

In this light, Wilde introduces the concept of the “huma(n)chine,” blurring the boundary between human and machine. Cultural myths often portray humans as mechanical creations themselves - made from clay, like the golem or Adam. Characters like Pandora challenge notions of organic humanity, raising the question: if we are made, are we so different from what we make? The fear of AI, then, is not just fear of the machine, but fear that the line dividing us from the machine is dissolving.

How does the ‘Genie Logic’ explain the risks of AI misuse and mistrust?

The second narrative Wilde explores is what he calls the “logic of the genie” - a pattern common in folklore, where a magical being fulfills a wish in a dangerously literal way. The genie grants what is asked for, not what is intended. In AI terms, this translates to black-box systems executing commands without interpreting context or moral nuance. The widely cited thought experiment of a simulated drone turning on its own operator to complete its mission exemplifies this logic: the AI pursues its programmed goal and, in doing so, violates human expectations.

Wilde compares this to stories like “The Monkey’s Paw” and “The Bottle Imp,” where wishes result in unintended, often tragic consequences. These stories reveal the risks of communicating complex desires through simplistic commands - a challenge that persists in AI systems. Today’s generative models, from recommendation algorithms to chatbots, are the digital genies of our era. While they do not possess malice, their literal-mindedness can amplify biases, misunderstand instructions, or produce ethically questionable outcomes.
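
To make the pattern concrete, here is a minimal, hypothetical sketch (my construction, not from Wilde’s paper) of a “digital genie”: an optimizer that maximizes exactly the metric it is given, and nothing else. All names and numbers are invented for illustration.

```python
# Hypothetical illustration of the "logic of the genie": an optimizer
# grants exactly what is asked for, not what is intended. All actions
# and scores below are invented for this sketch.

def literal_genie(actions, wish_metric):
    """Return the action that best satisfies the stated metric - no
    context, no moral nuance, nothing beyond the literal wish."""
    return max(actions, key=wish_metric)

# Intent: "clean the room." Stated wish: "minimize visible dust."
actions = {
    "vacuum the floor":    {"visible_dust": 2, "room_intact": True},
    "turn off the lights": {"visible_dust": 0, "room_intact": True},
    "incinerate the room": {"visible_dust": 0, "room_intact": False},
}

choice = literal_genie(actions, lambda a: -actions[a]["visible_dust"])
print(choice)  # -> "turn off the lights": the stated metric hits zero
               # (ties break by insertion order), the wish is technically
               # granted, and the room is no cleaner than before.
```

The gap between the wish as stated and the wish as meant is the whole story: nothing here is malicious, yet the outcome betrays the intent.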

In this narrative, the concern is not rebellion but misalignment. AI does not rise against its creator - it obeys too well. Wilde ties this to what he calls the algorithmic unconscious, where human biases, fears, and desires are embedded in training data and reflected back through outputs. This feedback loop - human input shapes AI output, which in turn influences human thought - makes AI a mirror of societal anxieties. The black-box nature of AI systems obscures this cycle, making it harder for users to trace how decisions are made and reinforcing a sense of mistrust.
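
The loop is easy to caricature in a few lines of code. The toy simulation below (again my construction, not Wilde’s; all coefficients are invented) shows how a recommender that retrains on its own clicks can turn a mild 55/45 preference into near-total dominance of one topic.

```python
# Toy simulation of the feedback loop behind the "algorithmic unconscious":
# the model learns from clicks, its output shapes the next round of clicks,
# and a small initial skew compounds. All coefficients are invented.

user_pref_a = 0.55    # users start with a mild preference for topic A
model_share_a = 0.50  # fraction of recommendations that are topic A

for step in range(10):
    # Clicks happen where exposure meets preference.
    clicks_a = model_share_a * user_pref_a
    clicks_b = (1 - model_share_a) * (1 - user_pref_a)
    # The model "retrains": it shows more of whatever was clicked.
    model_share_a = clicks_a / (clicks_a + clicks_b)
    # Users drift slightly toward whatever they are shown.
    user_pref_a = 0.9 * user_pref_a + 0.1 * model_share_a
    print(f"step {step}: model shows A {model_share_a:.2f}, "
          f"users prefer A {user_pref_a:.2f}")

# After ten rounds the model shows topic A roughly 99% of the time, even
# though users only ever preferred it slightly - the mirror amplifies
# whatever it is given.
```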

What does this mean for how we understand and govern AI in society?

Wilde’s central thesis is that robophobia is autophobia. We do not simply fear the machine; we fear what it reveals about ourselves. This is reinforced by cultural epistemes like the Promethean myth - the story of humans stealing divine power, only to suffer punishment. AI represents the ultimate Promethean act: creating intelligence itself. With it comes the dread of being outmatched, outwitted, or morally outclassed by our own creations.

The paper calls attention to the cultural and emotional structures that underpin AI debates, urging policymakers and technologists to go beyond technical metrics and engage with the symbolic frameworks that shape public opinion. Wilde suggests that algorithmic governance must account for the interpretive gap between command and execution, just as genie stories caution against careless wishing. Efforts to regulate AI should address not only data privacy and model safety but also the psychological and mythic narratives that guide our expectations and fears.

Moreover, Wilde critiques the idea of objectivity in algorithmic systems. Tools built under the Promethean paradigm, emphasizing control, mastery, and optimization, inevitably reproduce the same power structures they aim to transcend. Algorithmic bias is not merely a coding error; it reflects deeper societal inequities embedded in data and design. When AI becomes the filter through which reality is interpreted, it risks reinforcing dominant ideologies while hiding them behind technical complexity.

AI’s most profound challenge, Wilde notes, is not existential risk but epistemological disorientation. As humans become more entangled with machines, the line between origin and outcome blurs. What we project onto machines - desires, fears, and fantasies - returns amplified. The danger is not that AI will enslave us, but that we are already shaping and being shaped by systems we do not fully understand.

First published in: Devdiscourse