Big tech’s algorithmic power echoes medieval and enlightenment-era knowledge regimes

CO-EDP, VisionRI | Updated: 06-12-2025 22:37 IST | Created: 06-12-2025 22:37 IST

A new academic investigation has issued a stark warning that generative artificial intelligence is rapidly becoming a dominant global authority over knowledge, reshaping the rules of information governance and echoing historical patterns of centralized control. The research argues that modern AI systems now influence how billions of people encounter news, search results, educational material, and political content, creating a structural shift in power that mirrors, and in some ways magnifies, the epistemic control once held by religious and scientific institutions throughout history.

The analysis, “Will Power Return to the Clouds? From Divine Authority to GenAI Authority,” compares medieval Church power, Enlightenment scientific authority, and Big Tech’s algorithmic dominance. The authors warn that generative AI platforms have become new gatekeepers of truth, embedding opaque decision-making systems into everyday digital life. Their findings suggest that unless comprehensive governance mechanisms are implemented, AI may harden into an unprecedented form of digital orthodoxy with global consequences.

AI emerges as a modern epistemic authority with historical parallels

Generative AI represents a new phase in the long evolution of authority over knowledge. Throughout history, societies relied on powerful institutions to define what counted as legitimate truth. In medieval Europe, this authority rested in religious doctrine. During the Enlightenment, it shifted to scientific institutions built on rational-legal principles. Today, generative AI platforms, controlled by a small group of private firms, determine which voices are amplified, moderated, or suppressed across digital spaces.

The researchers outline how generative AI systems increasingly decide what information reaches the public. Large language models filter search results, summarize news, generate content, and moderate online interactions. Their decisions stem from vast training datasets, algorithmic policies, and content-filtering mechanisms that operate at speeds and scales unmatched by historical authorities.

To analyze this transformation, the authors integrate three major theoretical frameworks. First, Michel Foucault’s power/knowledge concept helps explain how modern AI platforms define truth through automated moderation and training-data curation, similar to how past institutions guarded doctrinal authority. Second, Max Weber’s authority typology is expanded to include rational-technical and emerging agentic-technical authority, reflecting how algorithms now derive legitimacy from computational optimization rather than theology or legal procedure. Third, Luciano Floridi’s Dataism highlights the growing centrality of data itself as the foundation for truth claims in digital societies.

The study identifies five recurring patterns of epistemic authority across eras: disciplinary power, authority modality, data inclusivity, trust-versus-reliance dynamics, and pathways of resistance. Despite differences in context, the authors find structural continuities between the Church’s power over medieval knowledge, Enlightenment-era scientific gatekeeping, and Big Tech’s algorithmic governance.

A key historical comparison draws from the Galileo Affair, a defining moment when religious authority suppressed scientific discovery. The study shows how this mirrors today’s algorithmic moderation dynamics, where content can be down-ranked, demonetized, or removed based on opaque policies. While modern platforms do not impose spiritual penalties, the structural effect is similar: controlling the circulation of ideas and narrowing the boundaries of permissible knowledge.

The researchers note that unlike historical authorities, generative AI systems operate globally, instantly, and invisibly. Where doctrinal shifts once took decades, algorithmic updates now propagate across billions of digital feeds in minutes. This speed introduces new risks, especially when combined with the limited transparency of AI training data, the scarcity of representation for minority languages, and the rise of automated policy engines capable of silently adjusting content-filtering rules.

Algorithmic control, data inequities, and the deepening trust gap

GenAI systems rely on algorithmic control mechanisms that shape public discourse while remaining opaque to users and regulators. Moderation pipelines combine supervised classifiers, pretrained models, policy engines, and rule graphs that govern what content is allowed or suppressed. Because these systems operate behind closed interfaces, users often discover restrictions only after engagement plummets or accounts are flagged.
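
The study does not publish pipeline code, but a minimal sketch can make the described architecture concrete: a pretrained classifier scores content, and an opaque rule layer decides whether to allow, down-rank, or remove it. Every function name, label, threshold, and rule below is a hypothetical illustration, not the systems the researchers examined.

```python
# Hypothetical sketch of a layered moderation pipeline: a pretrained
# classifier scores content, then policy rules (invisible to the user)
# decide the action. All labels, thresholds, and rules are illustrative.
from dataclasses import dataclass

@dataclass
class Decision:
    action: str   # "allow", "downrank", or "remove"
    reason: str

def classify(text: str) -> dict:
    """Stand-in for a pretrained classifier returning per-label scores."""
    # A real system would call a hosted or local model here.
    return {"toxicity": 0.12, "medical_claim": 0.81}

POLICY_RULES = [
    # (label, threshold, action) -- the kind of rule graph the article
    # describes, typically undisclosed to the end user.
    ("toxicity", 0.90, "remove"),
    ("medical_claim", 0.75, "downrank"),
]

def moderate(text: str) -> Decision:
    scores = classify(text)
    for label, threshold, action in POLICY_RULES:
        if scores.get(label, 0.0) >= threshold:
            return Decision(action, f"{label} score {scores[label]:.2f} >= {threshold}")
    return Decision("allow", "no rule triggered")

if __name__ == "__main__":
    print(moderate("Example post about a home remedy."))
```

Because the user only ever sees the final action, not the scores or rules that produced it, restrictions tend to surface indirectly, for instance as the sudden drop in engagement the article describes.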

The researchers note that this creates a growing trust-reliance gap. Across global surveys, users rely on AI systems for news, search queries, and productivity tasks but express decreasing trust in their fairness, accuracy, and ethical alignment. This gap destabilizes the legitimacy of AI authority, particularly as misinformation concerns and ethical violations become more visible.

The study highlights representation gaps in AI training data as one of the main ethical concerns. English accounts for more than half of the text in major model corpora, while many African and Indigenous languages each account for less than one percent. This imbalance leads to higher error rates, biased moderation outcomes, and misclassification of culturally specific expressions. Such dynamics mirror historical exclusions in which only certain groups' knowledge was deemed valid or visible.
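
To make the imbalance concrete, a small sketch of how language shares in a corpus might be tallied; the document counts below are made-up placeholders, not figures from the study.

```python
# Illustrative tally of language shares in a training corpus.
# The counts below are placeholders, not the study's data.
from collections import Counter

corpus_langs = ["en"] * 62 + ["zh"] * 14 + ["es"] * 9 + ["fr"] * 6 + \
               ["sw"] * 1 + ["qu"] * 1 + ["other"] * 7   # 100 documents

shares = Counter(corpus_langs)
total = sum(shares.values())
for lang, count in shares.most_common():
    print(f"{lang:>6}: {100 * count / total:.1f}% of documents")
# Languages near or below 1% of the corpus tend to see higher error
# rates and more misclassification, as the article notes.
```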

AI-generated misinformation and synthetic content further challenge epistemic stability. The authors note that deepfake technologies create highly realistic images and videos that erode trust in evidence itself. As synthetic media becomes indistinguishable from authentic recordings, societies risk experiencing the same crisis of legitimacy once triggered by religious or ideological conflicts over truth.

The analysis warns that algorithmic systems can reinforce negative feedback loops. Biased training data produces biased outputs, which then influence user behavior, further shaping future datasets. This cyclical dynamic parallels historical patterns in which entrenched institutions shaped knowledge in ways that reinforced their own authority while excluding marginalized perspectives.
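
A toy simulation, using entirely hypothetical parameters, illustrates how such a loop can compound: each training-and-deployment cycle slightly amplifies the viewpoint that already dominates the data.

```python
# Toy simulation of the feedback loop described above: an over-represented
# viewpoint is amplified by the model, and the amplified output skews the
# data collected for the next training round. Parameters are hypothetical.

def next_share(current_share: float, amplification: float = 1.15) -> float:
    """One training-deployment cycle: the majority viewpoint is boosted,
    and the boosted output becomes part of the future dataset."""
    boosted = current_share * amplification
    return min(boosted / (boosted + (1 - current_share)), 1.0)

share = 0.60  # majority viewpoint's initial share of the data
for generation in range(1, 6):
    share = next_share(share)
    print(f"generation {generation}: majority share = {share:.2%}")
```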

The study also contrasts modern algorithmic control with historical oversight systems. Medieval heresy trials and Enlightenment scientific review processes were publicly documented, whereas AI moderation practices are often invisible and lack procedural transparency. This opacity makes it difficult for users, regulators, and researchers to understand why certain content is removed or promoted.

Resistance to generative AI authority is already emerging through open-source alternatives, legislative efforts, public audits, and civic-technology initiatives. However, the authors caution that resistance is structurally disadvantaged because Big Tech companies control training data, proprietary models, and global distribution platforms. This imbalance echoes past periods when centralized authorities controlled access to sacred texts or scientific forums.

Governance blueprint calls for transparency, inclusivity, and AI literacy

To address the risks associated with generative AI authority, the study proposes a four-pillar governance framework aimed at preventing the formation of a global digital orthodoxy. The authors argue that effective governance must account for the historical lessons of past authority regimes while adapting to the rapid evolution of modern AI.

The first pillar is a mandatory international model registry, which would document the architecture, training data sources, and policy logs of large AI models. This measure would support transparency, enable independent audits, and provide a historical record of algorithmic changes. The authors compare this approach to scientific reporting standards that emerged during the Enlightenment, noting that visibility is essential for accountability.
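
The paper does not specify a schema for such a registry, but a record might plausibly capture fields like the following; every field name and value here is an assumption for illustration, not a published standard.

```python
# One possible shape for an international model-registry record.
# Field names and values are assumptions inferred from the article.
from dataclasses import dataclass
from datetime import date

@dataclass
class ModelRegistryEntry:
    model_name: str
    developer: str
    architecture: str                 # e.g. "decoder-only transformer"
    parameter_count: str              # order of magnitude is often enough
    training_data_sources: list[str]  # provenance of the corpus
    policy_log_uri: str               # where moderation-policy changes are recorded
    last_audit: date

entry = ModelRegistryEntry(
    model_name="example-llm-v1",
    developer="Example Labs",
    architecture="decoder-only transformer",
    parameter_count="~70B",
    training_data_sources=["licensed news archive", "public web crawl"],
    policy_log_uri="https://registry.example.org/example-llm-v1/policy-log",
    last_audit=date(2025, 6, 1),
)
print(entry.model_name, "last audited", entry.last_audit.isoformat())
```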

The second pillar calls for representation quotas and regional observatories to address linguistic and cultural inequities in model training. At least 30 percent of seats on AI standards committees, according to the study, should be dedicated to Global South stakeholders, Indigenous communities, and linguistic minorities. This effort seeks to counterbalance the dominance of English-language datasets and ensure that diverse perspectives shape AI development.

The third pillar focuses on critical-AI literacy, which the authors view as necessary for rebuilding public trust. They propose integrating AI ethics and governance education into national curricula, offering community workshops, and promoting public-facing resources that explain how AI models make decisions. Increasing public understanding would help users distinguish between mere reliance and genuine trust, reducing their vulnerability to misinformation.

The fourth pillar is based on community-led data trusts, enabling marginalized groups to curate, manage, and license their own datasets. This initiative offers a mechanism for diversifying training data at scale while maintaining respectful and community-controlled data governance. It also directly addresses concerns about data colonialism by allowing communities to participate in shaping model inputs.

The authors argue that these measures collectively aim to rebalance the relationship between centralized AI authority and democratic oversight. The goal is not to eliminate algorithmic efficiency but to ensure that the benefits of AI do not come at the cost of epistemic justice or global inclusivity.

FIRST PUBLISHED IN: Devdiscourse