Illusion of AI debate: Public controversies mask technocratic power
A team of leading researchers from European universities has issued a stark reassessment of how public debates about artificial intelligence (AI) shape society. Their work, published in Big Data & Society, asserts that the controversies surrounding AI, often presented as moments of democratic accountability, may in fact be tools of technocratic control. The paper is a critical intervention in understanding the politics of AI debate.
Titled “On the Controversiality of AI: The Controversy Is Not the Situation,” the study challenges a central assumption of modern technology discourse: that public controversy automatically translates into public engagement. Instead, the authors argue that AI controversies are increasingly shaped by elite actors, such as corporate executives, scientists, and policymakers, who use public warnings and staged debates to reinforce their authority rather than open it to scrutiny.
When AI debate becomes a performance of power
The research places AI controversies within a long tradition of public disputes over science and technology, from nuclear energy to genetic modification. Historically, such controversies were seen as democratic moments that invited citizens to question scientific authority. The authors argue that AI marks a departure from this pattern.
In today’s media-driven environment, AI controversies are often sparked not by whistleblowers or activists but by the very figures driving the technology’s development. When major scientists and CEOs alternately praise and warn about AI’s potential, the result is what the study terms a “theatre of authority.” Rather than promoting pluralism, these spectacles blur the line between critique and promotion, casting the same elite actors as both innovators and moral guardians.
According to the authors, this dynamic turns AI controversies into performances—events that simulate openness while ultimately narrowing the space for genuine democratic participation. Public fears about automation, surveillance, and bias are reframed as technical challenges best handled by experts, while social and political dimensions remain unaddressed.
The study identifies this shift as a new stage in the relationship between technology and power: one where the spectacle of controversy serves to stabilize, not disrupt, existing hierarchies.
The difference between controversy and situation
The authors argue that the visible debates surrounding AI, such as disputes over existential risk, deepfakes, or algorithmic bias, often fail to reflect the lived realities and structural consequences of the technology.
In this framework, the controversy is not necessarily the situation; rather, it may conceal it. While controversy plays out in headlines and public hearings, the real situations involve the everyday entanglements of AI with labor, inequality, governance, and surveillance.
The authors outline four possible relationships between controversy and situation:
- The controversy conceals the situation, diverting attention from systemic exploitation, such as data labor or supply chain inequalities.
- The controversy articulates the situation, as in the exposure of algorithmic bias or facial recognition abuse.
- The situation articulates the controversy, where social tensions, like mistrust in government or media, shape how AI issues are contested.
- The controversy and the situation are disconnected, with debates serving as intellectual exercises detached from practical concerns.
This analytical model shifts the focus from studying controversies as self-contained events to understanding them as expressions or distortions of deeper socio-political conditions. It calls for researchers and policymakers to look beyond the spectacle and examine the hidden infrastructures that sustain the AI ecosystem: data economies, labor practices, and institutional incentives.
From democratic debate to technocratic spectacle
The study warns that AI controversies are undergoing a process of “authoritarianization,” a term the authors use to describe how apparent openness can mask centralized control. Public concern over AI’s dangers, rather than curbing its spread, is frequently used to accelerate policy and investment in its development.
The authors note how governments and corporations invoke the urgency of regulation as a justification for expanding AI deployment. This paradox, where fear fuels innovation, illustrates how the politics of AI is being reorganized around spectacle and control.
Drawing on examples from recent global debates, the paper shows that the framing of AI as an existential risk often sidelines discussions about labor exploitation, environmental costs, and social justice. By focusing on hypothetical future threats, elites avoid addressing the immediate and tangible harms caused by algorithmic systems in policing, welfare, and employment.
The authors argue that such controversies create an illusion of democratic oversight while reinforcing technocratic governance. When experts monopolize the narrative, the public is left as an audience, not a participant, in shaping the ethics and direction of AI.
The article also revisits the tradition of controversy analysis in science and technology studies (STS). Earlier scholarship viewed controversies as opportunities to map networks of actors and foster pluralism. The authors argue that this framework must now evolve to confront the political instrumentalization of controversy itself. In the era of algorithmic publicity, dominated by hype cycles, influencer experts, and corporate communications, neutral observation risks reproducing the very hierarchies it seeks to critique.
Reclaiming the politics of controversy
Despite its critique, the study does not dismiss the potential of controversy altogether. Marres and her co-authors propose reworking controversy analysis into a more situated and participatory practice. This involves three key directions:
- Recovering multiplicity, by distinguishing between orchestrated media debates and the diverse local conflicts surrounding AI implementation.
- Integrating participatory and design-based methods, such as citizen-led data audits and creative engagements that translate AI’s impact into material, relatable forms.
- Examining friction and resistance, focusing on small-scale tensions, such as those over workplace automation, data privacy, and algorithmic fairness, that reveal how power is negotiated in everyday life.
Through this lens, AI controversies can still serve democratic ends, but only when they are grounded in lived experience rather than mediated spectacle.
The politics of AI will not be transformed through grand public debates alone. Instead, progress depends on rebuilding accountability from the ground up, by enabling affected communities to articulate their own positions and redefine what counts as a legitimate issue of concern.
FIRST PUBLISHED IN: Devdiscourse

