Generative AI risks reinforcing human supremacy, philosophers warn
A new wave of academic scrutiny is reshaping how artificial intelligence is interpreted within ethical and philosophical frameworks. In a rigorous theoretical critique titled “Humanism strikes back? A posthumanist reckoning with ‘self-development’ and generative AI,” published in AI & Society, researchers Sam Cadman, Claire Tanner, and Patrick Cheong-Iao Pang interrogate the increasingly accepted notion that generative AI represents a posthuman breakthrough. Instead, the study cautions that the rapid elevation of AI tools like ChatGPT risks reviving foundational tenets of Enlightenment Humanism and anthropocentric ideology rather than transcending them.
The paper introduces a vital conceptual distinction between “post-dualist self-development” (PDSD) - a central tenet of posthumanist thought in which matter and machines are seen as agentic and as evolving without reliance on human control - and “technical self-development” (TSD), which describes machine learning systems like large language models (LLMs) that evolve based on massive datasets and unsupervised algorithms. The authors argue that conflating these two forms of development creates a dangerous philosophical slippage. Without clarity, AI systems may be wrongly hailed as evidence of a posthumanist future while covertly re-inscribing the human supremacy they purport to undermine.
Does AI truly represent a break from Humanist ideals?
The research challenges current academic and cultural narratives that frame generative AI as a fundamentally posthuman innovation. Through a systematic evaluation of how LLMs operate, the authors reveal that AI’s perceived agency often stems from superficial linguistic mimicry rather than genuine autonomy or embodied experience. This mimicry, they argue, reflects the logic of the Turing Test - passing as human - rather than any true ontological parity with the human or non-human other.
In posthumanist ethics, the goal is to dismantle the idealized, universalized figure of the rational, male, white human that underpins Western Humanism. However, the authors warn that generative AI, far from subverting this image, may be reinforcing it. By imitating abstract and disembodied patterns of human expression - drawn from internet data overwhelmingly shaped by Western norms - AI systems risk perpetuating the very values posthumanism seeks to dissolve. The illusion of novelty generated by these tools, they contend, is produced through pseudo-randomization, not emergent creativity. Consequently, generative AI becomes a powerful mechanism for reanimating Humanist structures under the guise of radical innovation.
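To see why “pseudo-randomization” is an apt description, consider a minimal illustrative sketch (not drawn from the paper; the vocabulary and probability values below are invented): a language model picks each word by sampling from a fixed probability distribution with a seeded pseudo-random number generator, so re-running with the same seed reproduces the supposedly novel output verbatim.

```python
import math
import random

# Toy next-token scores over a fixed vocabulary (invented values, standing in
# for the logits a trained language model would produce for the next word).
logits = {"ocean": 2.1, "machine": 1.7, "dream": 1.3, "ledger": 0.4}

def softmax(scores):
    """Turn raw scores into a probability distribution."""
    z = sum(math.exp(v) for v in scores.values())
    return {tok: math.exp(v) / z for tok, v in scores.items()}

def sample_token(probs, rng):
    """Pseudo-randomly draw one token according to its probability."""
    return rng.choices(list(probs), weights=list(probs.values()), k=1)[0]

probs = softmax(logits)

# Two runs with the same seed produce identical "novelty": the variation is
# pseudo-random sampling over a fixed distribution, not emergent creativity.
for seed in (42, 42):
    rng = random.Random(seed)
    print(seed, [sample_token(probs, rng) for _ in range(5)])
```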
How does anthropomorphism fuel ideological misdirection?
A key mechanism identified in the study is the human tendency to anthropomorphize machines. The authors argue that this vulnerability, intentionally exploited by AI developers and marketers, leads users to perceive machines as intelligent, creative, and even emotionally responsive, despite their operations being statistically deterministic and devoid of true affect or agency. This anthropomorphic transference, they suggest, is not merely misleading - it plays a critical role in re-legitimizing anthropocentric worldviews.
The research highlights how LLMs like ChatGPT construct responses by slicing, recombining, and rephrasing existing human language. These processes, while technically impressive, do not constitute autonomous thought or self-awareness. Nevertheless, they are presented and consumed as such, with outputs sometimes celebrated as “posthuman art” or “AI-generated literature.” This framing draws attention away from the technology’s roots in commodified human data, which is repackaged as machine output.
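A drastically simplified stand-in for that slicing and recombining (again purely illustrative, not the authors’ example; real systems use transformer networks trained on billions of words, and the tiny corpus here is made up) is a bigram model, which generates fluent-looking text solely by chaining word pairs it has already observed in human-written data:

```python
import random
from collections import defaultdict

# Tiny stand-in corpus; real systems ingest billions of human-written words.
corpus = ("the machine imitates the human and the human "
          "imitates the machine that imitates language").split()

# Build a bigram table: map every word to the words observed after it.
bigrams = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev].append(nxt)

def generate(start, length, rng):
    """Chain observed word pairs; every output word already exists in the data."""
    word, out = start, [start]
    for _ in range(length - 1):
        followers = bigrams.get(word)
        if not followers:  # dead end: no observed continuation
            break
        word = rng.choice(followers)
        out.append(word)
    return " ".join(out)

print(generate("the", 8, random.Random(7)))
```

Every word such a model emits was put there by a human writer; only the recombination is new, which is the asymmetry the authors argue anthropomorphism conceals.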
Importantly, the authors point out that such anthropomorphism obscures the power asymmetries embedded in the training data itself. Generative AI systems are trained on massive corpora derived from dominant cultural, linguistic, and ideological systems - primarily white, English-speaking, and male-centric. Without critical interrogation, AI becomes a mechanism for preserving and amplifying the very social hierarchies posthumanism seeks to disrupt.
Can posthumanist theory adapt to the reality of unsupervised AI?
The study delivers a sharp internal critique of posthumanist scholarship, arguing that much of its recent engagement with AI has lacked technical precision. By collapsing PDSD and TSD into a single category of “machine agency,” some scholars have inadvertently supported the mainstream AI narrative that positions tools like ChatGPT as ethically transformative. The authors warn that this conceptual laxity risks subordinating posthumanist ethics to the marketing strategies of Big Tech.
To correct this course, the researchers propose a sequential paradigm where PDSD can only be meaningfully applied after a rigorous analysis of a system’s technical development under the TSD framework. This reframing would allow for more accurate assessments of AI’s socio-ethical impact, making visible the specific architectures, datasets, and anthropomorphic design elements that shape each system’s outputs.
Such a paradigm also creates space to distinguish between AI domains. For example, the protein-folding model AlphaFold2, which earned its developers a share of the 2024 Nobel Prize in Chemistry, is trained on homogeneous, biologically grounded data and performs a task that carries minimal ideological baggage. By contrast, ChatGPT’s linguistic outputs are soaked in cultural assumptions and reproduce patterns shaped by colonial, capitalist, and patriarchal histories.
The authors call for posthumanist scholars to reclaim their critical edge by embracing the nuance that TSD-PDSD analysis offers. Without such tools, posthumanism may inadvertently legitimize AI systems that not only fail to subvert human exceptionalism but actively reinforce it through scale, ubiquity, and the allure of machinic objectivity.
- FIRST PUBLISHED IN: Devdiscourse

