Big tech and AI drive subtle shift toward digital authoritarianism
Artificial intelligence (AI) is no longer just a technical infrastructure supporting economic efficiency and digital convenience. According to new academic research published in the journal AI & Society, it is increasingly functioning as a political force that reshapes power, weakens democratic institutions, and amplifies authoritarian tendencies across Western societies.
The study, titled Technofascism: AI, Big Tech, and the New Authoritarianism, offers a systematic analysis of how artificial intelligence, corporate power, and contemporary political dynamics intersect to produce what the author defines as technofascism. Drawing on classical fascism theory and modern political philosophy, the research argues that AI is not politically neutral and that its deployment increasingly mirrors historical fascist patterns in updated, digitally mediated forms.
How artificial intelligence enables a new mode of authoritarian control
The research argues that AI facilitates authoritarian power not through overt repression or mass violence, but through subtle, pervasive, and largely invisible mechanisms. Unlike twentieth-century fascism, which relied on public spectacle and physical coercion, technofascism operates through data extraction, algorithmic governance, emotional manipulation, and behavioral prediction. These tools allow power to be exercised quietly, continuously, and at scale.
At the center of this transformation is algorithmic governance. Decision-making processes that once involved human judgment, debate, and accountability are increasingly delegated to automated systems. AI-driven tools now influence welfare distribution, insurance eligibility, hiring decisions, credit scoring, content moderation, and law enforcement prioritization. These systems are often opaque to both users and administrators, creating a structure where decisions are followed without understanding, scrutiny, or moral reflection.
The study highlights how this automation fosters what political theorists have described as thoughtlessness. When decisions are perceived as technical outputs rather than political choices, responsibility is diffused and ethical accountability erodes. Individuals who rely on AI systems may comply with outcomes they would otherwise question, assuming that algorithmic decisions are objective, neutral, or inevitable.
AI also enables unprecedented forms of emotional and behavioral manipulation. Through personalized profiling, machine learning systems can predict preferences, fears, and vulnerabilities with high precision. In digital environments, this allows for targeted influence that adapts in real time, shaping beliefs, consumption habits, and political attitudes without overt persuasion. These mechanisms resemble historical fascist propaganda, but with far greater efficiency and reach.
The research further emphasizes that AI systems increasingly simulate authority. Large language models generate confident, coherent outputs that resemble expert judgment, encouraging users to defer rather than critically evaluate. Over time, this dynamic undermines autonomy and weakens the capacity for independent reasoning, a condition that authoritarian systems have historically relied upon.
Big tech power and the revival of corporatist politics
The study draws focus to the role of large technology corporations in enabling technofascist dynamics. The concentration of economic, informational, and infrastructural power in a small number of firms mirrors historical corporatist arrangements in which private industry and state authority become deeply intertwined.
Big Tech companies now control essential communication platforms, data infrastructures, and AI development pipelines. This dominance allows them to shape public discourse, influence political agendas, and resist democratic oversight. The research argues that this concentration of power does not merely reflect market success, but represents a structural shift in how authority is organized in digital societies.
The paper links this development to classic fascist corporatism, where economic elites aligned with political leaders to consolidate power while maintaining the appearance of order and efficiency. In contemporary contexts, technology firms often position themselves as apolitical innovators while simultaneously engaging in extensive lobbying, regulatory capture, and strategic alliances with governments and military institutions.
A key feature of this dynamic is the ideological framing of technology itself. Narratives surrounding artificial general intelligence, accelerationism, and long-term technological salvation function as modern myths that legitimize concentrated power. These narratives portray technological progress as inevitable and morally necessary, discouraging resistance and reframing social costs as acceptable sacrifices for a promised future.
The study argues that these myths resemble historical fascist ideologies in their function, if not their symbolism. They mobilize collective imagination, elevate elite figures as visionary leaders, and marginalize dissent as irrational or anti-progress. In doing so, they help normalize inequality, weaken democratic checks, and justify the prioritization of technological expansion over social welfare.
Importantly, the research notes that this power consolidation often occurs alongside anti-government rhetoric. While technology leaders publicly criticize regulation and state intervention, they remain deeply dependent on public funding, military contracts, and state-backed infrastructure. This contradiction reflects a broader pattern in which corporate and state power converge while democratic institutions are sidelined.
Digital politics, emotional spectacle, and the erosion of democracy
Digital platforms, amplified by AI, play a central role in reshaping political engagement, transforming it from a deliberative process into an emotional spectacle.
Social media platforms prioritize content that maximizes engagement, often favoring outrage, fear, and polarization. AI-driven recommendation systems amplify divisive narratives, reinforce group identities, and promote simplified us-versus-them thinking. These dynamics mirror key elements of fascist mobilization, which historically relied on emotional resonance rather than rational debate.
The research highlights how digital politics increasingly revolves around leader-follower relationships rather than institutional accountability. Metrics such as follower counts, virality, and algorithmic visibility create hierarchies of influence that resemble personality cults. Political authority becomes performative, measured by attention rather than legitimacy or competence.
At the same time, digital platforms foster what the study describes as the illusion of participation. Users are encouraged to express opinions, react emotionally, and engage symbolically, while structural power relations remain unchanged. This dynamic pacifies dissent by channeling frustration into performative acts that do not translate into institutional change.
AI intensifies these effects by automating content moderation, trend amplification, and narrative shaping. Decisions about what information circulates, which voices are elevated, and which perspectives are suppressed are increasingly embedded in technical systems beyond democratic scrutiny. This process gradually hollows out the public sphere while preserving the appearance of pluralism.
The research also draws attention to how AI-driven technologies penetrate intimate spheres of life. Chatbots and digital companions promise connection, empathy, and support, particularly in contexts of loneliness and social fragmentation. While framed as solutions to modern isolation, these technologies further entrench dependence on corporate platforms and substitute simulated relationships for genuine social bonds. Historically, fascist movements exploited similar vulnerabilities by offering belonging and meaning in times of alienation.
A call for democratic resistance and structural change
Technofascism is not an inevitable outcome of technological progress, but a political configuration shaped by choices, incentives, and institutional weaknesses. Addressing it requires more than incremental regulation or ethical guidelines. According to the research, meaningful resistance demands structural change in how AI is developed, governed, and integrated into society.
Key strategies include strengthening democratic institutions, limiting corporate concentration, enforcing transparency and accountability in AI systems, and ensuring that technology serves public rather than private power. The paper emphasizes the importance of digital literacy, civic education, and public engagement in countering algorithmic domination.
The research also calls for alternative technological models that prioritize decentralization, democratic oversight, and social justice. Rather than accepting efficiency and convenience as overriding values, societies must reassert human judgment, empathy, and pluralism as foundational principles.
FIRST PUBLISHED IN: Devdiscourse