AI workers hold rising geopolitical power as tech giants outpace regulation

CO-EDP, VisionRI | Updated: 29-11-2025 10:49 IST | Created: 29-11-2025 10:49 IST

The world’s current approach to governing artificial intelligence technologies is not strong enough to match the growing influence of major AI companies, warns a new research analysis.

The study, titled “AI Workers, Geopolitics, and Algorithmic Collective Action” by Sydney Reis, argues that while governments race to shape national and international AI strategies, powerful technology firms continue to expand their reach, often moving faster than policymakers can respond. The paper states that this mismatch has created a new political landscape where AI companies operate in ways that resemble state-level actors, influencing global affairs through their resources, technology, and control over emerging systems.

The author’s analysis introduces an alternative path for strengthening AI governance. Instead of relying solely on top-down policymaking, the research calls for direct attention to the people inside AI labs who design and develop advanced systems. These workers, often portrayed as engineers or researchers with little political influence, are identified as having the potential to shape global outcomes because they operate at the heart of the technologies that governments, militaries, corporations, and public institutions increasingly depend on.

Tech power rises as traditional regulatory tools fall behind

The study outlines how rapid advances in AI and the sharp concentration of power in a few companies have reshaped the global political environment. The author uses concepts from International Political Economy to explain how large technology firms have accumulated influence comparable to nation-states. According to the analysis, these firms control essential infrastructure, hold massive financial and data resources, and participate in geopolitical negotiations in subtle but significant ways.

The author argues that governments often prioritize economic growth, national security, and competitive advantage in the global AI race. As a result, regulators frequently hesitate to impose strict limits on large AI companies. This dynamic creates a structure in which states depend on private sector innovation while lacking the leverage to enforce strong guardrails. The paper notes that this dependency weakens the ability of international frameworks to control AI risks such as surveillance expansion, automated information manipulation, military applications, and transnational data exploitation.

The research stresses that global competition intensifies these pressures. Countries view AI breakthroughs as strategic assets, and this perspective encourages them to support national champions rather than constrain them. The study says this situation mirrors earlier eras when state-industry alliances shaped geopolitical outcomes, but warns that today’s AI tools have far wider social reach and operate at speeds that outpace traditional political processes.

The study identifies further challenges in global governance, including slow multilateral coordination, fragmented policy agendas, and the difficulty of enforcing international agreements across diverse political systems. These structural problems, the paper argues, allow large AI companies to influence regulatory timelines, shape global standards, and effectively operate across borders with minimal oversight. In this environment, top-down regulation struggles to keep pace with the technical complexity and speed of AI development.

AI workers identified as a new geopolitical force inside technology labs

To address the gaps in formal governance, the study shifts focus from state institutions to the people who create the algorithms that underpin modern AI systems. The author argues that AI workers are not only skilled technical staff but also strategic actors whose decisions affect global power relations. They design systems that shape communication, economic behavior, political narratives, and surveillance capabilities. This gives them unique leverage at a time when digital tools influence statecraft, conflict, markets, and civic life.

The study provides historical context by noting that AI worker activism is not new. Previous incidents include organized resistance to military partnerships, surveillance contracts, and projects seen as raising human rights concerns. These actions, the research suggests, show that AI workers have already demonstrated the capacity to influence corporate decisions from within. Their insider knowledge, technical authority, and strategic positions allow them to identify harmful developments long before regulators become aware of them.

The author frames AI workers as having a type of soft geopolitical power. They operate inside organizations whose products shape international dynamics and whose decisions can steer political narratives or military capabilities. Because these workers understand the technology better than regulators or the broader public, they are uniquely equipped to challenge dangerous projects, expose structural risks, or redirect development toward safer pathways.

The paper warns, however, that AI worker resistance is often fragmented and short-lived. Workers face professional risks, corporate pressure, and limited organizational support. Their influence is real but unstable, and without structured support systems, their efforts risk being overlooked or diffused. The study argues that, despite these challenges, AI workers remain among the most important under-acknowledged groups in the global AI landscape.

The author links this idea to the broader field of algorithmic labor, which includes platform workers, data laborers, gig workers, and other groups affected by algorithmic management. However, the study emphasizes that AI lab workers occupy a different strategic position: they do not merely interact with algorithms but build them, and this creates distinct opportunities to shape the direction of AI in ways that reduce social harm.

A call for new collective action and participatory design to strengthen AI governance

The study introduces a framework that positions AI workers within the emerging field of Algorithmic Collective Action. This field generally explores how groups affected by algorithms can coordinate to shift power balances, demand protections, or shape technology outcomes. Past efforts have focused on groups with limited leverage, but the author argues that AI workers represent a distinct category of actors with significant structural influence: they can detect harmful trends early, understand technical risks deeply, and collectively apply pressure on corporate leadership.
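The paper's framework is conceptual, but the core mechanism of Algorithmic Collective Action can be made concrete with a toy simulation. The sketch below is a hypothetical illustration, loosely modeled on the signal-planting strategy studied in the machine-learning literature on collective action (Hardt et al., 2023), not on anything in Reis's paper: a small coordinated fraction of participants plants an agreed-upon signal in the data a firm trains on, then uses that signal to steer the resulting model. All names and parameters are illustrative assumptions.

```python
# Toy sketch of algorithmic collective action (illustrative only):
# a small fraction of users coordinates to plant a trigger signal in
# training data and relabel it, steering the model a firm later trains.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d = 5000, 20
X = rng.normal(size=(n, d))
y = (X[:, 0] > 0).astype(int)            # the firm's "true" task

alpha = 0.05                              # fraction of users in the collective
k = int(alpha * n)
signal = np.zeros(d)
signal[-1] = 5.0                          # agreed-upon trigger feature

X[:k] += signal                           # collective plants the signal...
y[:k] = 1                                 # ...and relabels toward its target

model = LogisticRegression(max_iter=1000).fit(X, y)  # firm trains as usual

X_test = rng.normal(size=(1000, d))
success = (model.predict(X_test + signal) == 1).mean()  # trigger at test time
print(f"collective size {alpha:.0%}: target-label rate {success:.1%}")
```

Even a few percent of coordinated participants can be enough for the planted association to dominate at test time. That is roughly the structural point the paper extends from users outside firms to workers inside them, who sit even closer to the systems being built.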

To support this type of action, the paper recommends applying Participatory Design methods within AI labs. These methods emphasize collaboration, co-creation, and shared decision-making. The goal is to create internal structures that help workers reflect on the geopolitical impacts of their work, evaluate ethical dilemmas, identify emerging risks, and organize around shared values. Unlike compliance-oriented interventions, Participatory Design focuses on building spaces where meaningful reflection and coordinated action can grow.

The study explains that these participatory approaches should not produce static tools or checklists. Instead, they should support adaptable processes that evolve as technology changes. They would invite AI workers to examine how their decisions intersect with geopolitics, national strategies, social inequality, and systemic risk. Through structured reflection, workers may become more aware of how everyday technical tasks contribute to broader political landscapes.

The author argues that this framework could become a powerful complement to state-led regulation. Formal policies often struggle to evolve as quickly as AI systems. Internal worker-driven interventions could act as early warning mechanisms and provide ethical pressure where formal oversight is slow or incomplete. The study suggests that by empowering workers, organizations may also reduce internal conflicts, improve long-term governance, and build a more stable foundation for responsible innovation.

The future of AI governance, as the study suggests, will likely require a balance between state oversight and worker-driven internal action. As AI continues to expand into all areas of public life, individuals inside labs will face growing moral and geopolitical responsibilities. Their potential influence, the study says, should be recognized as an essential part of global AI governance.
