From data extraction to climate costs: The hidden colonial roots of AI
Governments and corporations worldwide are developing ethical guidelines to ensure that AI systems are fair, accountable, and safe. However, a growing critique suggests that these efforts may overlook deeper structural inequalities tied to colonial histories and global power imbalances.
In a new paper titled AI Ethics through a Decolonial Lens: What Does AI Ethics Look Like If We Take Seriously the Push to Decolonise It?, published in AI & Society, scholars argue that mainstream AI ethics reproduces colonial hierarchies of thought and governance. The authors warn that without a structural shift toward Global South perspectives, Indigenous knowledge systems, and ecological accountability, AI ethics risks reinforcing the very inequalities it claims to mitigate.
The paper does not simply call for diversity within existing frameworks. Instead, it challenges the epistemological foundations of AI ethics, asserting that many widely adopted principles are shaped by Eurocentric philosophical traditions that overlook plural ways of knowing and being.
Western ethics frameworks under scrutiny
Over the past decade, AI ethics initiatives have multiplied across governments, multinational corporations, and international bodies. Most revolve around familiar principles such as fairness, transparency, accountability, privacy, safety, and human autonomy. Risk-based regulatory models, including those emerging in Europe, have attempted to classify AI systems according to potential harm and impose safeguards accordingly.
The authors argue that these frameworks, while important, are limited in scope. They are typically grounded in Western liberal traditions that treat autonomy as individual self-determination and ethics as a set of universal rules. This framing, the authors suggest, often ignores the cultural, political, and historical situatedness of AI systems.
The paper highlights how many global AI ethics guidelines were drafted with minimal participation from scholars and communities in Africa, Latin America, Central Asia, and parts of Southeast Asia. As a result, concerns central to Global South populations, including extractive labor practices, data exploitation, environmental degradation, and geopolitical power asymmetries, remain peripheral in dominant ethical discourse.
The authors trace this imbalance to the broader logic of coloniality, a concept rooted in Latin American decolonial thought. Coloniality refers not merely to historical colonial rule, but to enduring patterns of power, knowledge production, and economic domination that persist long after formal colonialism ends. Within this framework, AI becomes part of a larger ecosystem shaped by Western dominance in technology, data capitalism, and global governance.
Mainstream AI ethics, the paper argues, often treats harm as a technical glitch that can be resolved through improved datasets, better bias mitigation, or clearer documentation. While such measures are necessary, they do not address structural inequities embedded in global supply chains, digital infrastructures, and knowledge hierarchies.
Data colonialism and the question of sovereignty
The paper contends that AI development mirrors historical patterns of extraction. The authors draw on scholarship describing data colonialism, a process in which data from individuals and communities are harvested, aggregated, and monetized by powerful corporations, often without meaningful consent or equitable benefit sharing.
In this model, Global South communities frequently serve as sources of raw data, low-wage digital labor, and mineral extraction, while economic gains accrue to technology firms headquartered in the Global North. From content moderation workers in Kenya to cobalt mining in the Democratic Republic of Congo, the material and human costs of AI are distributed unevenly.
According to the authors, decolonising AI ethics requires confronting this political economy. Ethical reflection cannot remain confined to algorithmic fairness metrics if it ignores the conditions under which AI systems are trained and deployed.
The authors place particular focus on data sovereignty. For Indigenous communities and marginalized populations, data sovereignty movements seek to assert collective control over how data are collected, stored, analyzed, and shared. Rather than treating data as a commodity owned by corporations or states, these movements frame data governance as a matter of self-determination.
However, the paper acknowledges that data sovereignty alone does not guarantee justice. Infrastructure gaps, cybersecurity vulnerabilities, and global trade regimes complicate efforts to localize control. Furthermore, open data debates raise tensions between accessibility and protection, especially in contexts where data can be weaponized or misused.
The authors also introduce the concept of data poverty, referring to the limited availability of high-quality datasets in many Global South countries. This scarcity restricts the ability of local researchers and institutions to develop AI systems tailored to their own contexts. Meanwhile, large technology companies consolidate massive proprietary datasets, reinforcing asymmetries in innovation capacity.
A decolonial AI ethics, the paper argues, would not only advocate for local data governance but also address structural barriers to equitable AI development. This includes rethinking open sourcing practices, federated learning models, and public infrastructure investments that reduce dependence on corporate monopolies.
AI, ecology, and more-than-human justice
The authors further extend their critique to the ecological dimensions of AI. They argue that current AI ethics frameworks remain largely anthropocentric, focusing on human rights and human-centered values while neglecting environmental harm.
Training large-scale AI models demands substantial energy consumption and water use. Data centers rely on extensive cooling systems that strain local water supplies. Hardware production depends on mining rare earth minerals such as cobalt, lithium, and nickel, often extracted in regions facing environmental stress and labor exploitation.
The paper places these processes within what some scholars call techno-molecular colonialism, an extractive logic that treats both natural resources and human behavior as raw material for data-driven systems. This logic intensifies environmental burdens in Global South regions, even as AI is marketed as a tool for sustainability and climate optimization.
The authors argue that decolonising AI ethics requires expanding moral consideration beyond humans to include more-than-human entities and ecosystems. They draw on Indigenous worldviews in which land, body, and spirit are intertwined, challenging the dominant narrative that technology exists to master or control nature.
Such a shift would demand a redefinition of progress. Instead of measuring success by speed, scale, and profitability, a decolonial framework would prioritize ecological sustainability, relational accountability, and slower technological development. The authors acknowledge that this proposition runs counter to the accelerationist logic of global tech markets, but they insist that environmental survival may depend on it.
Rethinking ethics at the global scale
Decolonising AI ethics is not a symbolic gesture. It requires structural transformation in how knowledge is produced, whose voices are heard, and how AI systems are funded and governed.
The authors call for greater inclusion of Global South scholarship in AI ethics debates, not as an afterthought but as a foundational perspective. This includes engaging with theories of relational autonomy that differ from Western liberal models, and recognizing that community-centered values may challenge individualistic assumptions embedded in current frameworks.
They also point to unresolved domains that warrant further inquiry, including AI infrastructure ownership, labor conditions within AI supply chains, and international trade agreements that shape digital sovereignty. Decolonisation, they suggest, must grapple with these material realities rather than remain confined to abstract principles.
First published in: Devdiscourse