AI reinforces global inequality as North dominates standards and benefits
A new international study published in AI & Society warns that the divide between the Global North and Global South is reshaping how AI’s risks and benefits are experienced. The authors argue that practitioners working with AI in Africa, Asia, South America, the Caribbean, and minoritised communities in the West face a profoundly different reality from the one captured in dominant global governance frameworks.
The research, titled "Understanding AI and Power: Situated Perspectives from Global North and South Practitioners," investigates how AI professionals across continents interpret the technology's capabilities, dangers, and social impact. Drawing on interviews with 22 practitioners from research, technical, and governance roles, the study examines how structural inequality, political dynamics, institutional fragility, and global economic forces shape real-world AI development far beyond the rhetoric of innovation and efficiency.
The study makes a simple claim with profound implications: AI is not neutral. It operates within global power structures that determine who benefits, who is burdened, and who gets to define what responsible AI even means.
AI’s meaning depends on power, not algorithms
The research examines what AI actually represents to those who build, regulate, and deploy it across different world regions. The authors find that the majority of practitioners reject the idea of AI as an autonomous or independent force. Instead, they frame AI as a human-shaped technology whose impact is determined by the intentions, capabilities, and political choices of developers, governments, and corporations.
This understanding sharply contrasts with deterministic narratives that depict AI as a self-directing engine of social transformation. While such narratives dominate policy discussions in the Global North, the research shows that in many parts of the Global South, practitioners view AI as deeply embedded in existing social and political arrangements. It is treated less as a futuristic machine and more as an extension of human decision-making, shaped by the priorities of those who design and control it.
In several regions, the study finds a strong focus on AI as a tool for completing practical tasks such as automation, prediction, or classification. Practitioners in education, healthcare, and other public sectors often engage with AI as a means of solving domain-specific problems. However, this applied orientation can narrow awareness of broader societal implications, particularly in institutional settings where sociotechnical reflection is limited. The tendency to treat AI through a technical or procedural lens contributes to disciplinary silos, reinforcing the perception that AI is a logical, rational system rather than a technology embedded within unequal power structures.
This tension shapes how harm is conceptualized. Many practitioners believe that harms arise not from AI itself but from decisions made by those who set system objectives, select data sources, allocate resources, and design institutional infrastructures. In this framing, responsibility lies with human and organizational actors who shape AI’s outcomes. The researchers note that this perspective shifts attention from algorithms to the broader socio-political environment in which AI systems operate.
At the same time, participants recognize that AI systems can legitimize or magnify existing inequalities. When AI tools are deployed in environments already marked by discrimination or asymmetry, they act as amplifiers of harm. This dual recognition, that AI is shaped by structural conditions and in turn intensifies them, runs throughout the study.
Ethical practice varies widely and reflects local constraints
The authors find that ethical reasoning is often a negotiated and context-specific process rather than the application of a stable framework or formalized standard.
In many Global South settings, the absence of comprehensive national regulation forces developers and researchers to shoulder ethical responsibility individually. Without institutional safeguards, practitioners rely on personal judgment, informal collaboration, and ad-hoc approaches to decision-making. This dynamic is particularly visible in regions with limited professional training, weak regulatory institutions, or insufficient funding to support governance structures.
Conversely, practitioners in more regulated environments often draw on formal compliance requirements or organizational guidelines. However, even in high-income countries, the study finds that ethical work is frequently fragmented, with responsibility unevenly distributed across teams and departments. Efforts to ensure fairness or accountability often depend on internal advocacy rather than sector-wide standards, underscoring the patchwork nature of ethical oversight.
The research highlights a recurring tension between innovation and responsibility. Practitioners in both North and South describe pressure to rapidly develop and deploy new AI systems, even when ethical considerations are under-resourced or viewed as secondary to technical or commercial goals. This tension is heightened in regions where economic development goals position AI as a solution to national challenges. Under such conditions, the burden of navigating risk falls disproportionately on individual practitioners who must balance advancement with accountability.
The study also reveals that exposure to social science perspectives can broaden technical practitioners’ understanding of ethical obligations. Interdisciplinary exchanges gradually shift thinking from narrow technical concerns to more complex sociotechnical ones. However, institutional structures often limit opportunities for such engagement, particularly in low-resource environments.
The authors conclude that ethical reasoning is neither universal nor uniform. It is shaped by the availability of regulation, institutional culture, access to expertise, and broader geopolitical conditions. These differences create uneven landscapes of risk and responsibility across the global AI ecosystem.
Global power imbalances shape AI governance and distribution of harm
The researchers find that practitioners across regions consistently point to the dominance of Global North institutions in defining global AI standards, research agendas, and regulatory frameworks.
This dominance manifests in several ways. First, many governance models and ethical guidelines originate from Europe and North America, where political and economic interests heavily shape the priorities embedded in such frameworks. Practitioners in the Global South argue that these models often fail to account for local realities, including resource constraints, historical inequalities, and cultural differences. As a result, imported governance structures frequently misalign with national contexts, producing policies that are either ineffective or irrelevant.
Second, the study identifies a recurring critique that AI in many Global South regions functions through extractive global data and labor economies. Large technology companies often benefit from data sourced in low- and middle-income countries while contributing little to local capacity-building, autonomy, or governance. The researchers describe this pattern as a continuation of historical exploitation, where regions serve as sources of raw material, now data and labor, while decision-making power remains concentrated elsewhere.
Practitioners interviewed for the study connect these dynamics to a broader trajectory of dependency, noting that their countries often act as consumers of AI technologies rather than active shapers of them. Many warn that this imbalance limits sovereign control, fuels inequality, and reinforces epistemic hierarchies that privilege Western expertise.
Yet the research also finds that practitioners are not passive in the face of these asymmetries. Across regions, AI professionals advocate for greater inclusion of local experts, regionally grounded frameworks, and new infrastructures that support autonomy in data and model development. Some call for more radical transformations, including the establishment of independent AI hubs, locally governed compute resources, and governance models rooted in community values.
Alongside these concerns, the study notes pockets of cautious optimism. Practitioners in several regions believe that AI could drive economic growth, strengthen public services, and support social development if designed within equitable governance structures. However, this optimism is tempered by awareness that benefits will remain unevenly distributed unless global power dynamics shift.
First published in: Devdiscourse

