AI is not neutral but rooted in power, capitalism, and the reduction of human life
A new philosophical study by Niclas Rautenberg of Universität Hamburg raises urgent questions about the trajectory of artificial intelligence (AI), claiming that today's AI boom is not a technological rupture but the latest phase in a centuries-old drive to turn the world and human life into measurable, controllable systems. The paper explores how modern AI reflects deeper intellectual traditions shaped by Edmund Husserl, Martin Heidegger, and Herbert Marcuse, offering a new critique of how machines are reshaping not just economies, but human self-understanding.
Published in AI & Society, the study titled "Artificial intelligence, calculative reason, and technical domination: lessons from Husserl, Heidegger, and Marcuse" claims that contemporary machine learning and generative AI systems are embedded in a long historical project aimed at rendering reality "computationally legible," with far-reaching political and social consequences.
AI as the culmination of a centuries-old drive to mathematize reality
The study traces AI's philosophical roots to early developments in Western science, particularly the shift from everyday practical knowledge to abstract, mathematical reasoning. Drawing on Husserl's analysis, the author argues that modern science initiated a process of "mathematization" that transformed how reality is understood.
In this framework, the world is no longer experienced primarily through lived, sensory engagement but through abstract models and calculations. Over time, this shift led to the belief that all aspects of reality, including human thought and behavior, can be quantified and predicted.
According to the study, contemporary AI represents the most advanced stage of this process. Machine learning systems operate by identifying statistical patterns in massive datasets, effectively translating complex human activities into mathematical relationships. Generative AI goes even further by producing text, images, and music, extending this logic into creative domains once considered uniquely human.
The paper argues that this transformation carries a hidden cost. As mathematical models gain authority, everyday knowledge and human experience are increasingly devalued. What cannot be measured or computed is often dismissed as subjective or unreliable.
This shift has broader social consequences. The study highlights how AI systems are often perceived as objective and unbiased, reinforcing trust in algorithmic decision-making across sectors such as hiring, healthcare, and governance. This perception, however, obscures the underlying assumptions and limitations of these systems.
The author warns that the growing reliance on AI risks replacing nuanced human judgment with simplified computational logic, narrowing how reality is interpreted and acted upon.
From calculation to domination: AI and the rise of a control-oriented worldview
Drawing on Heidegger's philosophy, the study moves beyond mathematization to examine the deeper logic driving AI development. It argues that the push to quantify the world is rooted in a broader desire for control and domination.
Heidegger's concept of "enframing" is central to this analysis. In this view, modern technology encourages people to see the world not as a complex, living environment but as a set of resources to be managed and exploited. Everything, including human beings, becomes part of a "standing reserve" available for use.
The study asserts that AI embodies this mindset in a particularly powerful way. Unlike earlier technologies, AI systems actively shape how people understand themselves and others. Through data collection and analysis, they transform human behavior into measurable inputs, reinforcing the idea that individuals can be optimized like machines.
Examples cited in the paper include wearable health devices that track bodily functions, AI-generated content that turns creativity into reproducible data, and chatbot interactions that redefine relationships in terms of efficiency and availability. These developments, the study argues, reflect a broader shift in how human life is valued.
Under this logic, qualities such as emotional depth, unpredictability, and individuality are increasingly seen as inefficiencies rather than strengths. The result is a form of self-alienation in which people begin to view themselves through the same calculative lens applied by machines.
The study also highlights the risk of this mindset becoming dominant. As AI systems become more integrated into daily life, they reinforce patterns of thinking that prioritize control, prediction, and optimization. Over time, this can limit alternative ways of understanding the world, making it harder to question or resist the underlying system.
This trend, as the author warns, could lead to a loss of critical agency, as individuals come to accept algorithmic logic as the default framework for decision-making.
Capitalism, AI, and the consolidation of technological domination
Finally, the study draws on Marcuse's critical theory to place AI within the economic structures of modern capitalism, arguing that the development and deployment of AI cannot be understood in isolation from the systems that produce and benefit from it.
According to the paper, capitalism provides the conditions under which AI's calculative logic becomes socially dominant. The integration of scientific methods into industrial production, combined with the pursuit of efficiency and profit, creates a feedback loop in which technological innovation reinforces existing power structures.
AI plays a key role in this process by enhancing productivity, optimizing labor, and enabling new forms of surveillance and control. The study points to the rise of algorithmic management, targeted advertising, and data-driven decision-making as examples of how AI is embedded in economic systems.
These technologies not only increase efficiency but also shape human behavior. By influencing consumption patterns, work practices, and social interactions, they create what Marcuse described as a "one-dimensional" society, where alternative ways of thinking and living are marginalized.
The paper argues that this dynamic extends to the development of AI itself. The industry is dominated by a relatively small group of corporations and individuals, whose priorities and values are reflected in the systems they build. This concentration of power raises concerns about whose interests are being served and whose voices are being excluded.
The author also points out the role of AI in reinforcing existing inequalities. Training data often reflects dominant cultural perspectives, while marginalized communities are underrepresented or misrepresented. This can lead to biased outcomes and further entrench social disparities.
The benefits of AI are unevenly distributed. While some groups gain from increased productivity and convenience, others face job displacement, reduced autonomy, and increased surveillance. The result is a complex system in which technological progress is closely tied to economic and political power, making it difficult to separate innovation from its broader consequences.
A call for critical engagement, not technological rejection
Notably, the study does not advocate abandoning AI. Instead, it calls for a more reflective and politically aware approach to technology. The author argues that understanding AI's historical and philosophical roots is essential for addressing its current challenges. By recognizing the underlying assumptions driving its development, policymakers, researchers, and users can better assess its impact and explore alternative paths.
The author calls for collective action, including critical research, institutional resistance to unchecked technological adoption, and collaboration with affected communities. Meaningful change, the study suggests, will require not only technical solutions but also broader social and political transformation.
First published in: Devdiscourse